diff --git "a/src/processing/output.jsonl" "b/src/processing/output.jsonl" --- "a/src/processing/output.jsonl" +++ "b/src/processing/output.jsonl" @@ -1,708 +1,1496 @@ -{"global_id": 0, "doc_id": "wavelength", "chunk_id": "0", "question_id": 1, "question": "What does AWS Wavelength enable developers to do?", "answer_span": "AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} -{"global_id": 1, "doc_id": "wavelength", "chunk_id": "0", "question_id": 2, "question": "What is a Wavelength Zone?", "answer_span": "Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. 
Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} -{"global_id": 2, "doc_id": "wavelength", "chunk_id": "0", "question_id": 3, "question": "What is the purpose of a carrier gateway?", "answer_span": "A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. 
• Network Border Group"} -{"global_id": 3, "doc_id": "wavelength", "chunk_id": "0", "question_id": 4, "question": "What can you extend to one or more Wavelength Zones?", "answer_span": "You can extend a virtual private cloud (VPC) to one or more Wavelength Zones.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} -{"global_id": 4, "doc_id": "wavelength", "chunk_id": "1", "question_id": 1, "question": "What is the purpose of a carrier gateway?", "answer_span": "It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. 
The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} -{"global_id": 5, "doc_id": "wavelength", "chunk_id": "1", "question_id": 2, "question": "What can you create in Wavelength Zones?", "answer_span": "You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} -{"global_id": 6, "doc_id": "wavelength", "chunk_id": "1", "question_id": 3, "question": "What interface provides a web interface to access Wavelength resources?", "answer_span": "AWS Management Console— Provides a web interface that you can use to access your Wavelength resources.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. 
• Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} -{"global_id": 7, "doc_id": "wavelength", "chunk_id": "1", "question_id": 4, "question": "Which operating systems support the AWS Command Line Interface?", "answer_span": "AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. 
The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} -{"global_id": 8, "doc_id": "wavelength", "chunk_id": "2", "question_id": 1, "question": "What operating systems are supported by the services mentioned in the text?", "answer_span": "is supported on Windows, macOS, and Linux.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} -{"global_id": 9, "doc_id": "wavelength", "chunk_id": "2", "question_id": 2, "question": "What does AWS SDKs provide?", "answer_span": "Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. 
Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} -{"global_id": 10, "doc_id": "wavelength", "chunk_id": "2", "question_id": 3, "question": "What are some use cases for AWS Wavelength mentioned in the text?", "answer_span": "Online betting and regulated industries, Media and entertainment, Healthcare, Augmented reality (AR) and virtual reality (VR), Connected vehicles, Smart factories, Real-time gaming.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. 
Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} -{"global_id": 11, "doc_id": "wavelength", "chunk_id": "2", "question_id": 4, "question": "How does AWS Wavelength help online betting and regulated industries?", "answer_span": "AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} -{"global_id": 12, "doc_id": "wavelength", "chunk_id": "3", "question_id": 1, "question": "What do real-time video analytics provide for live events?", "answer_span": "Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. 
Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} -{"global_id": 13, "doc_id": "wavelength", "chunk_id": "3", "question_id": 2, "question": "How does AWS Wavelength benefit medical training providers?", "answer_span": "Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. 
With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} -{"global_id": 14, "doc_id": "wavelength", "chunk_id": "3", "question_id": 3, "question": "What is the significance of low latency in real-time gaming?", "answer_span": "Real-time game streaming depends on low latency to preserve the user experience.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} -{"global_id": 15, "doc_id": "wavelength", "chunk_id": "3", "question_id": 4, "question": "What functionality does Cellular Vehicle-to-Everything (C-V2X) enable?", "answer_span": "Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. 
Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} -{"global_id": 16, "doc_id": "wavelength", "chunk_id": "4", "question_id": 1, "question": "What is the purpose of AWS Wavelength?", "answer_span": "depends on low latency to preserve the user experience.", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. 
• Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom"} -{"global_id": 17, "doc_id": "wavelength", "chunk_id": "4", "question_id": 2, "question": "What types of devices can connect to a carrier gateway?", "answer_span": "A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. • Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom"} -{"global_id": 18, "doc_id": "wavelength", "chunk_id": "4", "question_id": 3, "question": "What must you do before creating resources in the Wavelength Zone?", "answer_span": "first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone.", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. 
Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. • Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom"} -{"global_id": 19, "doc_id": "wavelength", "chunk_id": "4", "question_id": 4, "question": "What does any subnet created in a Wavelength Zone inherit?", "answer_span": "Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route.", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide You have control over the VPC networking components, such as IP address assignment, subnets, and route table creation. VPCs that contain a subnet in a Wavelength Zone can connect to a carrier gateway. A carrier gateway allows you to connect to the following resources: • 4G/LTE and 5G devices on the telecommunication carrier network • Internet access including fixed wireless access for select Wavelength Zone partners. For more information, see Multi-access AWS Wavelength. • Outbound traffic to public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. 
AWS recommends that you configure custom"} -{"global_id": 20, "doc_id": "wavelength", "chunk_id": "5", "question_id": 1, "question": "What does any subnet created in a Wavelength Zone inherit?", "answer_span": "Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through"} -{"global_id": 21, "doc_id": "wavelength", "chunk_id": "5", "question_id": 2, "question": "What does the local route enable?", "answer_span": "The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. 
The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through"} -{"global_id": 22, "doc_id": "wavelength", "chunk_id": "5", "question_id": 3, "question": "What are the two purposes of a carrier gateway?", "answer_span": "A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. 
Traffic from the telecommunication carrier network routes through"} -{"global_id": 23, "doc_id": "wavelength", "chunk_id": "5", "question_id": 4, "question": "What does the carrier gateway perform for the Wavelength instances' IP addresses?", "answer_span": "The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group.", "chunk": "public internet resources Subnets Any subnet that you create in a Wavelength Zone inherits the main VPC route table, which includes the local route. The local route enables connectivity between the subnets in the VPC, including the subnets that are in the Wavelength Zone. AWS recommends that you configure custom route tables for your subnets in Wavelength Zones. The destinations are the same destinations as a subnet in an Availability Zone or Local Zone, with the addition of a carrier gateway. For more information, see the section called “Routing”. Carrier gateways A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and internet. There is no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the telecommunication carrier, and devices on the telecommunication carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Subnets 6 AWS Wavelength Developer Guide Carrier IP address A Carrier IP address is the address that you assign to a network interface, which resides in a subnet in a Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through"} -{"global_id": 24, "doc_id": "wavelength", "chunk_id": "6", "question_id": 1, "question": "What does the carrier gateway use to translate the address?", "answer_span": "The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. 
Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} -{"global_id": 25, "doc_id": "wavelength", "chunk_id": "6", "question_id": 2, "question": "From where do you allocate a Carrier IP address?", "answer_span": "You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. 
Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} -{"global_id": 26, "doc_id": "wavelength", "chunk_id": "6", "question_id": 3, "question": "What must you create for the subnets in the Wavelength Zones?", "answer_span": "Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} -{"global_id": 27, "doc_id": "wavelength", "chunk_id": "6", "question_id": 4, "question": "What does the carrier gateway provide access to?", "answer_span": "The carrier gateway provides access to the internet from your Wavelength subnets.", "chunk": "Wavelength Zone (for example an EC2 instance). The carrier gateway uses the address for traffic from the interface to the internet or to mobile devices. The carrier gateway uses NAT to translate the address, and then sends the traffic to the destination. Traffic from the telecommunication carrier network routes through the carrier gateway. You allocate a Carrier IP address from a network border group, which is a unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses, for example, us-east-1-wl1-bos-wlz-1. 
Routing You can set the carrier gateway as a destination in a route table for the following resources: • VPCs that contain subnets in a Wavelength Zone • Subnets in Wavelength Zones Create a custom route table for the subnets in the Wavelength Zones so that the default route goes to the carrier gateway, which then sends traffic to the internet and telecommunication carrier network. Example: Carrier gateway routing to the public internet Consider a scenario with the following configuration: • A VPC with Availability Zones and a Wavelength Zone • A subnet in the Wavelength Zone • An EC2 instance in the subnet in the Wavelength Zone • A Carrier IP address for the network interface associated with the EC2 instance • An IP address association that maps the private IP address of the EC2 instance to the Carrier IP address Carrier IP address 7 AWS Wavelength Developer Guide You need the following entries in the Wavelength subnet route table. Destination Target Notes VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about"} -{"global_id": 28, "doc_id": "wavelength", "chunk_id": "7", "question_id": 1, "question": "What does the route 0.0.0.0/0 allow for?", "answer_span": "This route allows for intraVPC connectivity, including subnets in the Availability Zones.", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. 
• A VPC in your Region • A carrier gateway • A public"} -{"global_id": 29, "doc_id": "wavelength", "chunk_id": "7", "question_id": 2, "question": "What does the carrier gateway provide access to?", "answer_span": "The carrier gateway provides access to the internet from your Wavelength subnets.", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public"} -{"global_id": 30, "doc_id": "wavelength", "chunk_id": "7", "question_id": 3, "question": "What is the maximum transmission unit (MTU) between EC2 instances in the same Wavelength Zone?", "answer_span": "Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone.", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. 
Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public"} -{"global_id": 31, "doc_id": "wavelength", "chunk_id": "7", "question_id": 4, "question": "What resources are needed to get started using AWS Wavelength?", "answer_span": "The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public", "chunk": "VPC CIDR Local This route allows for intraVPC connectivity, including subnets in the Availability Zones. 0.0.0.0/0 carrier-gateway-id The Carrier IP address provides internet connectivity through the carrier gateway. Carrier gateway access to the public internet The carrier gateway provides access to the internet from your Wavelength subnets. For information about protocol considerations, see the section called “Networking considerations”. Traffic initiated from the EC2 instance for the internet uses the 0.0.0.0/0 route to route traffic to the carrier gateway. The carrier gateway maps the EC2 instance IP address to the Carrier IP address, and then sends the traffic to the telecommunication carrier. Example: Carrier gateway routing to the public internet 8 AWS Wavelength Developer Guide DNS EC2 instances use EC2 DNS to resolve domain names to IP addresses. Route 53 supports DNS features, such as domain registration, and DNS routing. Both public and private hosted Wavelength Zones are supported for routing traffic to specific domains. Route 53 resolvers are hosted in the Region. You can also use your own DNS services to resolve domain names. Maximum transmission unit Generally, the maximum transmission unit (MTU) is as follows: • 9001 bytes between EC2 instances in the same Wavelength Zone. • 1500 bytes between carrier gateway and a Wavelength Zone. • 1500 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a public IP address. • 1300 bytes between an EC2 instance in a Wavelength Zone and an EC2 instance in the Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public"} -{"global_id": 32, "doc_id": "wavelength", "chunk_id": "8", "question_id": 1, "question": "What is the first step to get started with AWS Wavelength?", "answer_span": "Step 1: Opt in to Wavelength Zones", "chunk": "Region when the traffic uses a private IP address. 
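If you want to confirm the 1300-byte path MTU between a Wavelength Zone instance and a Region instance over private IP addresses, a hedged check (assuming Linux iputils ping; 1272 is 1300 minus 28 bytes of IP and ICMP headers, and the target address is this guide's example instance IP) is: ping -M do -s 1272 10.0.3.112 If the reply reports that fragmentation is needed, lower the payload size until the ping succeeds to find the effective MTU.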
DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} -{"global_id": 33, "doc_id": "wavelength", "chunk_id": "8", "question_id": 2, "question": "What must you do before specifying a Wavelength Zone for a resource?", "answer_span": "Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone.", "chunk": "Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. 
On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} -{"global_id": 34, "doc_id": "wavelength", "chunk_id": "8", "question_id": 3, "question": "What should you review before launching an instance in a specific Wavelength Zone?", "answer_span": "Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas.", "chunk": "Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} -{"global_id": 35, "doc_id": "wavelength", "chunk_id": "8", "question_id": 4, "question": "Where can you find the Amazon EC2 console?", "answer_span": "Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "Region when the traffic uses a private IP address. DNS 9 AWS Wavelength Developer Guide Get started with AWS Wavelength The following diagram shows the resources that you need to configure to get started using AWS Wavelength. • A VPC in your Region • A carrier gateway • A public subnet in an Availability Zone in your Region • An instance in the public subnet • An instance in the Wavelength Zone subnet with a Carrier IP address Tasks • Step 1: Opt in to Wavelength Zones • Step 2: Configure your network • Step 3: Launch an instance in your Availability Zone public subnet 10 AWS Wavelength Developer Guide • Step 4: Launch an instance in the Wavelength zone • Step 5: Test the connectivity Step 1: Opt in to Wavelength Zones Before you specify a Wavelength Zone for a resource or service, you must opt in to the zone. Prerequisites • Some AWS resources are not available in all Regions. 
Make sure that you can create the resources that you need in the desired Region or Wavelength Zone before launching an instance in a specific Wavelength Zone. • Before you begin, review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. You should also speak with your mobile operator about mobile service plans and any additional requirements. To opt in to Wavelength Zone using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the Region selector in the navigation bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI"} -{"global_id": 36, "doc_id": "wavelength", "chunk_id": "9", "question_id": 1, "question": "What is the first step to enable Wavelength Zones using the AWS CLI?", "answer_span": "To do so, use the modify-availabilityzone-group command.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availabilityzone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} -{"global_id": 37, "doc_id": "wavelength", "chunk_id": "9", "question_id": 2, "question": "What should you create after opting in to the Wavelength Zone?", "answer_span": "create a VPC, a carrier gateway, and a public subnet in the Availability Zone.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. 
To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availabilityzone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} -{"global_id": 38, "doc_id": "wavelength", "chunk_id": "9", "question_id": 3, "question": "What is the URL to open the Amazon VPC console?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availabilityzone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. 
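For reference, a CLI sketch of the same two steps (the zone group name and CIDR block are examples): opt in to the Wavelength Zone group, then create the VPC.
aws ec2 modify-availability-zone-group --region us-east-1 --group-name us-east-1-wl1-bos-wlz-1 --opt-in-status opted-in
aws ec2 create-vpc --region us-east-1 --cidr-block 10.0.0.0/16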
Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} -{"global_id": 39, "doc_id": "wavelength", "chunk_id": "9", "question_id": 4, "question": "What is recommended for the IPv4 CIDR block when creating a VPC?", "answer_span": "We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918.", "chunk": "bar, select the Region for the Wavelength Zone. 3. On the navigation pane, choose EC2 Dashboard. 4. In the upper-right corner of the page, choose Account attributes, Zones. 5. Under Wavelength Zones, choose Manage. 6. Choose Enabled. 7. Choose Update zone group. To enable Wavelength Zones using the AWS CLI Alternatively, use the AWS CLI to enable Wavelength Zones. To do so, use the modify-availabilityzone-group command. Step 2: Configure your network After you opt in to the Wavelength Zone, create a VPC, a carrier gateway, and a public subnet in the Availability Zone. Step 1: Opt in to Wavelength Zones 11 AWS Wavelength Developer Guide Tasks • Create a VPC • Create a carrier gateway and a subnet associated with the Wavelength Zone • Create a public subnet in an Availability Zone Create a VPC Create a VPC to extend to your Wavelength Zone. To create a VPC using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. Choose Create VPC. 3. For Resources to create, choose VPC only. 4. For Name tag, optionally provide a name for your VPC. Doing so creates the tag Name=value. 5. For IPv4 CIDR block, specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Note You can specify a range of publicly routable IPv4 addresses. However, we currently do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create"} -{"global_id": 40, "doc_id": "wavelength", "chunk_id": "10", "question_id": 1, "question": "What happens if Windows instances are launched into a VPC with certain IP address ranges?", "answer_span": "Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges).", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. 
• A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} -{"global_id": 41, "doc_id": "wavelength", "chunk_id": "10", "question_id": 2, "question": "What resources are created when traffic is automatically routed from subnets to the carrier gateway?", "answer_span": "we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet.", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. • A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} -{"global_id": 42, "doc_id": "wavelength", "chunk_id": "10", "question_id": 3, "question": "What is the first step to create a carrier gateway?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. 
Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. • A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} -{"global_id": 43, "doc_id": "wavelength", "chunk_id": "10", "question_id": 4, "question": "What optional action can be taken when creating a carrier gateway?", "answer_span": "(Optional) For Name, enter a name for the carrier gateway.", "chunk": "publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 6. Choose Create VPC. Create a carrier gateway and a subnet associated with the Wavelength Zone After you create a VPC, create a carrier gateway, and then select the subnets that route traffic to the carrier gateway. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: Create a VPC 12 AWS Wavelength Developer Guide • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags except the Name tag to the subnet. • A network ACL with the following resources: • A subnet association with the subnet in the Wavelength Zone • Default inbound and outbound rules for your traffic. • A route table with the following resources: • A route for local traffic • A route that routes non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier gateways, and then choose Create carrier gateway. 3. (Optional) For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following: a. Under Existing subnets in Wavelength Zone, select the box for each Wavelength subnet to route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. 
(Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value."} -{"global_id": 44, "doc_id": "wavelength", "chunk_id": "11", "question_id": 1, "question": "What is the first step to create a subnet in the Wavelength Zone?", "answer_span": "To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, chose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} -{"global_id": 45, "doc_id": "wavelength", "chunk_id": "11", "question_id": 2, "question": "What should you do to add a tag to the carrier gateway?", "answer_span": "To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, chose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. 
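A minimal CLI sketch of the equivalent setup (the VPC ID, CIDR blocks, and zone names are placeholders): create the carrier gateway, a subnet in the Wavelength Zone, and a public subnet in an Availability Zone. Unlike the console flow, this does not create the route table or network ACL for you, so the routing shown earlier still needs to be configured separately.
aws ec2 create-carrier-gateway --vpc-id vpc-0abc1234def567890
aws ec2 create-subnet --vpc-id vpc-0abc1234def567890 --cidr-block 10.0.3.0/24 --availability-zone us-east-1-wl1-bos-wlz-1
aws ec2 create-subnet --vpc-id vpc-0abc1234def567890 --cidr-block 10.0.1.0/24 --availability-zone us-east-1a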
Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} -{"global_id": 46, "doc_id": "wavelength", "chunk_id": "11", "question_id": 3, "question": "What is the URL for the Amazon VPC console?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, chose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} -{"global_id": 47, "doc_id": "wavelength", "chunk_id": "11", "question_id": 4, "question": "What is the purpose of launching an EC2 instance in the public subnet?", "answer_span": "You will use this instance to test the connectivity from the Region to the Wavelength Zone.", "chunk": "route to the carrier gateway. b. To create a subnet in the Wavelength Zone, choose Add new subnet, enter the required information, and then choose Add new subnet. 6. (Optional) To add a tag to the carrier gateway, choose Add tag, and then enter the tag key and tag value. 7. Choose Create carrier gateway. Create a public subnet in an Availability Zone Create a subnet in an Availability Zone in the Region. To add a subnet 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. 
In the navigation pane, choose Subnets. Create a public subnet in an Availability Zone 13 AWS Wavelength Developer Guide 3. Choose Create subnet. 4. For VPC, choose the VPC. 5. For Subnet name, provide a name for the subnet. Doing so creates the tag Name=value. 6. For Availability Zone, chose an Availability Zone, or choose No Preference to have AWS choose one for you. 7. For IPv4 CIDR block, specify an IPv4 address range for your subnet, using CIDR notation. 8. Choose Create subnet. Step 3: Launch an instance in your Availability Zone public subnet Launch an EC2 instance in the subnet that you created in the Availability Zone. You will use this instance to test the connectivity from the Region to the Wavelength Zone. You can launch EC2 instances in the public subnet that you created. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the"} -{"global_id": 48, "doc_id": "wavelength", "chunk_id": "12", "question_id": 1, "question": "What is the first step after completing the networking configuration?", "answer_span": "After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" -image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). • SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. 
To allocate"} -{"global_id": 49, "doc_id": "wavelength", "chunk_id": "12", "question_id": 2, "question": "What does AWS recommend for allocating and associating a Carrier IP address?", "answer_span": "AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" -image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). • SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate"} -{"global_id": 50, "doc_id": "wavelength", "chunk_id": "12", "question_id": 3, "question": "What command is used to launch an instance in the Wavelength Zone subnet?", "answer_span": "Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" -image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). 
• SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate"} -{"global_id": 51, "doc_id": "wavelength", "chunk_id": "12", "question_id": 4, "question": "What must you specify to connect to an EC2 instance in a subnet?", "answer_span": "To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet.", "chunk": "Step 4: Launch an instance in the Wavelength zone After you complete the networking configuration, launch an instance, and then allocate a Carrier IP address for the instance. Options • Option 1: Auto assign a Carrier IP address • Option 2: Allocate and associate a Carrier IP address from the network border group Option 1: Auto assign a Carrier IP address AWS recommends that you use the AWS CLI because you can automatically allocate and associate the Carrier IP address with the network interface. Use the run-instances command as follows to launch an instance in the Wavelength Zone subnet. Step 3: Launch an instance in your Availability Zone public subnet 14 AWS Wavelength Developer Guide aws ec2 run-instances --region us-east-1 --network-interfaces \"DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE\" -image-id ami-04125ecea1EXAMPLE --instance-type t3.medium • DeviceIndex – Specify 0 to indicate the primary network interface (eth0). • SubnetId – Specify the ID of the subnet in the Wavelength Zone. • AssociateCarrierIpAddress – Set this value to true to assign a Carrier IP address to the network interface. Option 2: Allocate and associate a Carrier IP address from the network border group You can launch EC2 instances in the subnet that you created when you added the carrier gateway. For more information, see the section called “Create a carrier gateway and a subnet associated with the Wavelength Zone”. Security groups control inbound and outbound traffic for instances in a subnet, just as they do for instances in an Availability Zone subnet. To connect to an EC2 instance in a subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. 
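Written out with full-length option names, and with a follow-up check, the launch might look like the sketch below. This is illustrative only: the key pair name is hypothetical, the AMI, subnet, and network interface IDs are the example IDs used in this guide, and the Association.CarrierIp field is assumed to hold the Carrier IP that was assigned.
aws ec2 run-instances --region us-east-1 --image-id ami-04125ecea1EXAMPLE --instance-type t3.medium --key-name my-key-pair --network-interfaces "DeviceIndex=0,AssociateCarrierIpAddress=true,SubnetId=subnet-036aa298f4EXAMPLE"
aws ec2 describe-network-interfaces --network-interface-ids eni-1a2b3c4d --query "NetworkInterfaces[].Association.CarrierIp"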
To allocate"} -{"global_id": 52, "doc_id": "wavelength", "chunk_id": "13", "question_id": 1, "question": "What command is used to allocate a Carrier IP address?", "answer_span": "Use the allocate-address command as follows to allocate a Carrier IP address.", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. { \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} -{"global_id": 53, "doc_id": "wavelength", "chunk_id": "13", "question_id": 2, "question": "What is the example output for allocating a Carrier IP address?", "answer_span": "{ \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" }", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. 
{ \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} -{"global_id": 54, "doc_id": "wavelength", "chunk_id": "13", "question_id": 3, "question": "What command is used to associate a Carrier IP address with an EC2 instance?", "answer_span": "Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance.", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. { \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. 
In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} -{"global_id": 55, "doc_id": "wavelength", "chunk_id": "13", "question_id": 4, "question": "What should you do before testing the connectivity?", "answer_span": "Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic.", "chunk": "subnet, specify a key pair when you launch the instance, just as you do for instances in an Availability Zone subnet. For information about how to launch an instance using the Amazon EC2 console, see Launch an EC2 instance using the console in the Amazon EC2 User Guide. To allocate and associate a Carrier IP address 1. Use the allocate-address command as follows to allocate a Carrier IP address. aws ec2 allocate-address --region us-east-1 --domain vpc --network-border-group useast-1-wl1-bos-wlz-1 The following is example output. { \"AllocationId\": \"eipalloc-05807b62acEXAMPLE\", \"PublicIpv4Pool\": \"amazon\", \"NetworkBorderGroup\": \"us-east-1-wl1-bos-wlz-1\", \"Domain\": \"vpc\", \"CarrierIp\": \"155.146.10.111\" } Option 2: Allocate and associate a Carrier IP address from the network border group 15 AWS Wavelength 2. Developer Guide Use the associate-address command as follows to associate the Carrier IP address with the EC2 instance. aws ec2 associate-address --allocation-id eipalloc-05807b62acEXAMPLE --networkinterface-id eni-1a2b3c4d The following is example output. { \"AssociationId\": \"eipassoc-02463d08ceEXAMPLE\", } Step 5: Test the connectivity Before you test the connectivity, do the following: • Review the section called “Networking considerations” • Configure the instance security group to allow ICMP traffic. Test the connectivity from the instance in the Region to the Wavelength Zone instance. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. Run the ping command to the Wavelength Zone instance. In the following example, the IP address of the subnet in the Wavelength Zone is 10.0.3.112. 
ping 10.0.3.112 Pinging 10.0.3.112 Reply from 10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test"} -{"global_id": 56, "doc_id": "wavelength", "chunk_id": "14", "question_id": 1, "question": "What is the IP address used to test connectivity from the Wavelength Zone instance to the carrier network?", "answer_span": "In the following example, the carrier network IP address is 198.51.100.130.", "chunk": "10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 16 AWS Wavelength Developer Guide Test the connectivity from the instance in the Wavelength Zone instance to the carrier network. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. You need a device on the carrier network in order to test the connectivity from the Wavelength Zone to the carrier network. Run the ping command to an address in the carrier network. In the following example, the carrier network IP address is 198.51.100.130. ping 198.51.100.130 Pinging 198.51.100.130 Reply from 198.51.100.130: Reply from 198.51.100.130: Reply from 198.51.100.130: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 198.51.100.130 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 17 AWS Wavelength Developer Guide Carrier gateway for AWS Wavelength A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet. There is generally no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your"} -{"global_id": 57, "doc_id": "wavelength", "chunk_id": "14", "question_id": 2, "question": "What does a carrier gateway allow in terms of traffic?", "answer_span": "It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet.", "chunk": "10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 16 AWS Wavelength Developer Guide Test the connectivity from the instance in the Wavelength Zone instance to the carrier network. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. 
You can use a secure bastion host. You need a device on the carrier network in order to test the connectivity from the Wavelength Zone to the carrier network. Run the ping command to an address in the carrier network. In the following example, the carrier network IP address is 198.51.100.130. ping 198.51.100.130 Pinging 198.51.100.130 Reply from 198.51.100.130: Reply from 198.51.100.130: Reply from 198.51.100.130: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 198.51.100.130 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 17 AWS Wavelength Developer Guide Carrier gateway for AWS Wavelength A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet. There is generally no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your"} -{"global_id": 58, "doc_id": "wavelength", "chunk_id": "14", "question_id": 3, "question": "What is the round trip time in milliseconds for the ping to 10.0.3.112?", "answer_span": "Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms.", "chunk": "10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 16 AWS Wavelength Developer Guide Test the connectivity from the instance in the Wavelength Zone instance to the carrier network. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. You need a device on the carrier network in order to test the connectivity from the Wavelength Zone to the carrier network. Run the ping command to an address in the carrier network. In the following example, the carrier network IP address is 198.51.100.130. ping 198.51.100.130 Pinging 198.51.100.130 Reply from 198.51.100.130: Reply from 198.51.100.130: Reply from 198.51.100.130: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 198.51.100.130 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 17 AWS Wavelength Developer Guide Carrier gateway for AWS Wavelength A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet. There is generally no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. 
The carrier gateway provides connectivity between your"} -{"global_id": 59, "doc_id": "wavelength", "chunk_id": "14", "question_id": 4, "question": "What type of traffic does a carrier gateway support?", "answer_span": "A carrier gateway supports IPv4 traffic.", "chunk": "10.0.3.112: Reply from 10.0.3.112: Reply from 10.0.3.112: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 10.0.3.112 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 16 AWS Wavelength Developer Guide Test the connectivity from the instance in the Wavelength Zone instance to the carrier network. Depending on your operating system, use SSH or RDP to connect to the Carrier IP address of your Wavelength Zone instance. You can use a secure bastion host. You need a device on the carrier network in order to test the connectivity from the Wavelength Zone to the carrier network. Run the ping command to an address in the carrier network. In the following example, the carrier network IP address is 198.51.100.130. ping 198.51.100.130 Pinging 198.51.100.130 Reply from 198.51.100.130: Reply from 198.51.100.130: Reply from 198.51.100.130: bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 bytes=32 time=<1ms TTL=128 Ping statistics for 198.51.100.130 Packets: Sent = 3, Received = 3, Lost = 0 (0% lost) Approximate round trip time in milliseconds Minimum = 0ms, Maximum = 0ms, Average = 0ms Step 5: Test the connectivity 17 AWS Wavelength Developer Guide Carrier gateway for AWS Wavelength A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and it allows outbound traffic to the carrier network and the internet. There is generally no inbound connection configuration from the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your"} -{"global_id": 60, "doc_id": "wavelength", "chunk_id": "15", "question_id": 1, "question": "What is the purpose of a carrier gateway?", "answer_span": "The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. 
When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} -{"global_id": 61, "doc_id": "wavelength", "chunk_id": "15", "question_id": 2, "question": "What must you do to enable access to the carrier network for instances in a Wavelength subnet?", "answer_span": "To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} -{"global_id": 62, "doc_id": "wavelength", "chunk_id": "15", "question_id": 3, "question": "What is similar to how a carrier gateway performs NAT?", "answer_span": "The carrier gateway NAT function is similar to how an internet gateway functions in a Region.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. 
Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. • Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} -{"global_id": 63, "doc_id": "wavelength", "chunk_id": "15", "question_id": 4, "question": "What happens if you do not choose the option to automatically create resources related to carrier gateways?", "answer_span": "If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway.", "chunk": "the internet to a Wavelength Zone through the carrier gateway with the exception of select partners. For more information, see Multi-access AWS Wavelength. A carrier gateway supports IPv4 traffic. Carrier gateways are only available for VPCs that contain subnets in a Wavelength Zone. The carrier gateway provides connectivity between your Wavelength Zone and the carrier, and devices on the carrier network. The carrier gateway performs NAT of the Wavelength instances' IP addresses to the Carrier IP addresses from a pool that is assigned to the network border group. The carrier gateway NAT function is similar to how an internet gateway functions in a Region. Enable access to the carrier network To enable access to or from the carrier network for instances in a Wavelength subnet, you must do the following: • Create a VPC. • Create a carrier gateway and attach the carrier gateway to your VPC. When you create the carrier gateway, you can optionally choose which subnets route to the carrier gateway. When you select this option, we automatically create the resources related to carrier gateways, such as route tables and network ACLs. If you do not choose this option, then you must perform the following tasks: • Select the subnets that route traffic to the carrier gateway. • Ensure that your subnet route tables have a route that directs traffic to the carrier gateway. • Ensure that instances in your subnet have a globally unique Carrier IP address. 
• Ensure that your network access control lists and security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier"} -{"global_id": 64, "doc_id": "wavelength", "chunk_id": "16", "question_id": 1, "question": "What do security group rules allow?", "answer_span": "security group rules allow the relevant traffic to flow to and from your instance.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} -{"global_id": 65, "doc_id": "wavelength", "chunk_id": "16", "question_id": 2, "question": "What is the purpose of creating a carrier gateway?", "answer_span": "to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. 
Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} -{"global_id": 66, "doc_id": "wavelength", "chunk_id": "16", "question_id": 3, "question": "What limitation is mentioned regarding CIDR blocks in a VPC?", "answer_span": "we do not support direct access to the internet from publicly routable CIDR blocks in a VPC.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} -{"global_id": 67, "doc_id": "wavelength", "chunk_id": "16", "question_id": 4, "question": "What is recommended for specifying an IPv4 CIDR block for the VPC?", "answer_span": "We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918.", "chunk": "security group rules allow the relevant traffic to flow to and from your instance. 
Enable access to the carrier network 18 AWS Wavelength Developer Guide Work with carrier gateways The following sections describe how to manually create a carrier gateway for your VPC to support inbound traffic from the carrier network (for example, mobile phones), and to support outbound traffic to the carrier network and the internet. Tasks • Create a VPC • Create a carrier gateway • Create a security group to access the carrier network • Allocate and associate a Carrier IP address with the instance in the Wavelength Zone subnet • Routing to a Wavelength Zone carrier gateway • View the carrier gateway details • Manage carrier gateway tags • Delete a carrier gateway Create a VPC You can create an empty Wavelength VPC as follows. Limitation You can specify a range of publicly routable IPv4 addresses. However, we do not support direct access to the internet from publicly routable CIDR blocks in a VPC. Windows instances cannot boot correctly if launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (Class D and Class E IP address ranges). 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Your VPCs, Create VPC. 3. Do the following and then choose Create. • Name tag: Optionally provide a name for your VPC. Doing so creates a tag with a key of Name and the value that you specify. • IPv4 CIDR block: Specify an IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use"} -{"global_id": 68, "doc_id": "wavelength", "chunk_id": "17", "question_id": 1, "question": "What type of IP address ranges is recommended for the CIDR block in the VPC?", "answer_span": "We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918;", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. • A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. 
Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} -{"global_id": 69, "doc_id": "wavelength", "chunk_id": "17", "question_id": 2, "question": "What command is used to create a VPC using the AWS CLI?", "answer_span": "Use the create-vpc command.", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. • A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} -{"global_id": 70, "doc_id": "wavelength", "chunk_id": "17", "question_id": 3, "question": "What happens if you have not opted in to a Wavelength Zone?", "answer_span": "the Amazon Virtual Private Cloud Console prompts you to opt in.", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. 
• A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} -{"global_id": 71, "doc_id": "wavelength", "chunk_id": "17", "question_id": 4, "question": "What resources are created when you choose to automatically route traffic to the carrier gateway?", "answer_span": "we create the following resources: • A carrier gateway • A subnet.", "chunk": "IPv4 CIDR block for the VPC. We recommend that you specify a CIDR block from the private (non-publicly routable) IP address ranges as specified in RFC 1918; for example, 10.0.0.0/16, or 192.168.0.0/16. Work with carrier gateways 19 AWS Wavelength Developer Guide To create a VPC using the AWS CLI Use the create-vpc command. Create a carrier gateway After you create a VPC, create a carrier gateway and then select the subnets that route traffic to the carrier gateway. If you have not opted in to a Wavelength Zone, the Amazon Virtual Private Cloud Console prompts you to opt in. For more information, see the section called “Manage Zones”. When you choose to automatically route traffic from subnets to the carrier gateway, we create the following resources: • A carrier gateway • A subnet. You can optionally assign all carrier gateway tags that do not have a Key value of Name to the subnet. • A network ACL with the following resources: • A subnet associated with the subnet in the Wavelength Zone • Default inbound and outbound rules for all of your traffic. • A route table with the following resources: • A route for all local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnet To create a carrier gateway 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Carrier Gateways, and then choose Create carrier gateway. 3. Optional: For Name, enter a name for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20"} -{"global_id": 72, "doc_id": "wavelength", "chunk_id": "18", "question_id": 1, "question": "What is the first step to create a carrier gateway?", "answer_span": "For VPC, choose the VPC.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. 
• Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet."} -{"global_id": 73, "doc_id": "wavelength", "chunk_id": "18", "question_id": 2, "question": "What should you do to apply the carrier gateway tags to the subnet?", "answer_span": "To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. • Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet."} -{"global_id": 74, "doc_id": "wavelength", "chunk_id": "18", "question_id": 3, "question": "What command is used to create a carrier gateway using the AWS CLI?", "answer_span": "Use the create-carrier-gateway command.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. 
Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. • Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet."} -{"global_id": 75, "doc_id": "wavelength", "chunk_id": "18", "question_id": 4, "question": "What does a VPC security group allow by default?", "answer_span": "By default, a VPC security group allows all outbound traffic.", "chunk": "for the carrier gateway. 4. For VPC, choose the VPC. 5. Choose Route subnet traffic to carrier gateway, and under Subnets to route do the following. a. Under Existing subnets in Wavelength Zone, select the box for each subnet to route to the carrier gateway. Create a carrier gateway 20 AWS Wavelength b. Developer Guide To create a subnet in the Wavelength Zone, choose Add new subnet, specify the following information, and then choose Add new subnet: • Name tag: Optionally provide a name for your subnet. Doing so creates a tag with a key of Name and the value that you specify. • VPC: Choose the VPC. • Availability Zone: Choose the Wavelength Zone. • IPv4 CIDR block: Specify an IPv4 CIDR block for your subnet, for example, 10.0.1.0/24. • To apply the carrier gateway tags to the subnet, select Apply same tags from this carrier gateway. 6. 7. (Optional) To add a tag to the carrier gateway, choose Add tag, and then do the following: • For Key, enter the key name. • For Value, enter the key value. Choose Create carrier gateway. To create a carrier gateway using the AWS CLI 1. Use the create-carrier-gateway command. 2. Add a VPC route table with the following resources: • A route for all VPC local traffic • A route that routes all non-local traffic to the carrier gateway • An association with the subnets in the Wavelength Zone For more information, see the section called “Routing to a Wavelength Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. 
Then, you associate the security group with instances in the subnet."} -{"global_id": 76, "doc_id": "wavelength", "chunk_id": "19", "question_id": 1, "question": "What is the purpose of creating a security group in this context?", "answer_span": "Create a security group to access the carrier network", "chunk": "Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} -{"global_id": 77, "doc_id": "wavelength", "chunk_id": "19", "question_id": 2, "question": "What does a default VPC security group allow?", "answer_span": "By default, a VPC security group allows all outbound traffic.", "chunk": "Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} -{"global_id": 78, "doc_id": "wavelength", "chunk_id": "19", "question_id": 3, "question": "What can you do to allow inbound traffic from the carrier?", "answer_span": "You can create a new security group and add rules that allow inbound traffic from the carrier.", "chunk": "Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} -{"global_id": 79, "doc_id": "wavelength", "chunk_id": "19", "question_id": 4, "question": "What must be done after creating a security group?", "answer_span": "Then, you associate the security group with instances in the subnet.", "chunk": "Zone carrier gateway”. Create a security group to access the carrier network By default, a VPC security group allows all outbound traffic. You can create a new security group and add rules that allow inbound traffic from the carrier. Then, you associate the security group with instances in the subnet. Create a security group to access the carrier network 21"} -{"global_id": 80, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 1, "question": "What can you deploy using AWS Elastic Beanstalk?", "answer_span": "With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 
1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} -{"global_id": 81, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 2, "question": "What types of environments does Elastic Beanstalk provide?", "answer_span": "In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). 
To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} -{"global_id": 82, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 3, "question": "Which programming languages are supported by Elastic Beanstalk?", "answer_span": "Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} -{"global_id": 83, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 4, "question": "How can you interact with Elastic Beanstalk?", "answer_span": "You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. 
Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} -{"global_id": 84, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 1, "question": "What is the first step to use Elastic Beanstalk?", "answer_span": "To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. 
Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} -{"global_id": 85, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 2, "question": "What information is made available through the Elastic Beanstalk console?", "answer_span": "Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} -{"global_id": 86, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 3, "question": "Are there any additional charges for using Elastic Beanstalk?", "answer_span": "There is no additional charge for Elastic Beanstalk.", "chunk": "get started with Elastic Beanstalk. 
Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} -{"global_id": 87, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 4, "question": "What do you typically do before deploying your code to Elastic Beanstalk?", "answer_span": "Typically, you will develop your code locally then deploy it to Amazon EC2 server instances.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. 
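The deploy workflow described above (create an application, upload a source bundle, let Elastic Beanstalk launch an environment) maps onto three API calls. A hedged boto3 sketch: the S3 bucket, key, and solution stack name are placeholders, while getting-started-app and gs-app-web-env reuse the names from the tutorial text:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# 1) Create the application container.
eb.create_application(ApplicationName="getting-started-app")

# 2) Register a source bundle (already uploaded to S3) as an application version.
#    Bucket and key are placeholders for wherever your bundle lives.
eb.create_application_version(
    ApplicationName="getting-started-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "app-v1.zip"},
)

# 3) Launch an environment running that version. The solution stack name is a
#    placeholder; pick a current one from eb.list_available_solution_stacks().
eb.create_environment(
    ApplicationName="getting-started-app",
    EnvironmentName="gs-app-web-env",
    VersionLabel="v1",
    SolutionStackName="64bit Amazon Linux 2023 v4.0.0 running PHP 8.2",
)
```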
Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} -{"global_id": 88, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 1, "question": "What section should you refer to for learning how to develop locally and deploy from the command line?", "answer_span": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”.", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. 
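Because the platform must be chosen when the environment is created and can only be upgraded (not changed) afterwards, it helps to see which platform versions are currently offered. A small boto3 sketch that lists the available PHP solution stacks; the region is an assumption:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Print the currently available solution stacks, filtered to PHP platforms.
for name in eb.list_available_solution_stacks()["SolutionStacks"]:
    if "PHP" in name:
        print(name)
```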
Switching platforms If you need to change programming languages, you must create and switch to"} -{"global_id": 89, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 2, "question": "Are there any costs associated with using Elastic Beanstalk?", "answer_span": "There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end.", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to"} -{"global_id": 90, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 3, "question": "What will your first Elastic Beanstalk application consist of?", "answer_span": "Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform.", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. 
Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to"} -{"global_id": 91, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 4, "question": "What is an Elastic Beanstalk environment?", "answer_span": "An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance.", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide 6 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. 
You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to"} -{"global_id": 92, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 1, "question": "What must you choose when creating an environment?", "answer_span": "When you create an environment, you must choose the platform.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include"} -{"global_id": 93, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 2, "question": "What happens if you need to change programming languages?", "answer_span": "If you need to change programming languages, you must create and switch to a new environment on a different platform.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. 
The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include"} -{"global_id": 94, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 3, "question": "What is the first step to create an application?", "answer_span": "To create your example application, you'll use the Create application console wizard.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. 
Verify that Permissions policies include"} -{"global_id": 95, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 4, "question": "What role allows Elastic Beanstalk to monitor your EC2 instances?", "answer_span": "A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform.", "chunk": "supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include"} -{"global_id": 96, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 1, "question": "What is the first step to create the Service role?", "answer_span": "For Service role, choose Create role.", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. 
Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will"} -{"global_id": 97, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 2, "question": "Which permissions policies need to be verified when creating the Service role?", "answer_span": "Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. 
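The two roles created in the console steps above can also be created programmatically. A boto3 sketch using hypothetical role and instance-profile names (gs-beanstalk-*); the attached managed policies are the ones named in the text, though the exact ARN paths should be verified in the IAM console:

```python
import json
import boto3

iam = boto3.client("iam")

def trust_policy(service: str) -> str:
    # Trust policy that lets the given AWS service assume the role.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": service},
            "Action": "sts:AssumeRole",
        }],
    })

# Service role: lets Elastic Beanstalk monitor instances and apply managed updates.
iam.create_role(
    RoleName="gs-beanstalk-service-role",
    AssumeRolePolicyDocument=trust_policy("elasticbeanstalk.amazonaws.com"),
)
for arn in (
    "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth",
    "arn:aws:iam::aws:policy/AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy",
):
    iam.attach_role_policy(RoleName="gs-beanstalk-service-role", PolicyArn=arn)

# EC2 instance profile: lets the instances write logs and talk to other services.
iam.create_role(
    RoleName="gs-beanstalk-ec2-role",
    AssumeRolePolicyDocument=trust_policy("ec2.amazonaws.com"),
)
for arn in (
    "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier",
    "arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier",
    "arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker",
):
    iam.attach_role_policy(RoleName="gs-beanstalk-ec2-role", PolicyArn=arn)

iam.create_instance_profile(InstanceProfileName="gs-beanstalk-ec2-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="gs-beanstalk-ec2-profile",
    RoleName="gs-beanstalk-ec2-role",
)
```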
Updates will take less time because only changes will"} -{"global_id": 98, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 3, "question": "What should you choose to skip when finishing configuring and creating your application?", "answer_span": "Skip over EC2 key pair.", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will"} -{"global_id": 99, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 4, "question": "How long can the initial deploy take to create the resources?", "answer_span": "The initial deploy can take up to five minutes to create the resources.", "chunk": "other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. 
Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will"} -{"global_id": 100, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 1, "question": "What does Elastic Beanstalk do when you create an application?", "answer_span": "When you create an application, Elastic Beanstalk sets up the environments for you.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} -{"global_id": 101, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 2, "question": "How long can the initial deploy take?", "answer_span": "The initial deploy can take up to five minutes to create the resources.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. 
The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} -{"global_id": 102, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 3, "question": "What type of virtual machine does Elastic Beanstalk create?", "answer_span": "An Amazon EC2 virtual machine configured to run web apps on the platform you selected.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. 
You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} -{"global_id": 103, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 4, "question": "What is the purpose of the Amazon S3 bucket created by Elastic Beanstalk?", "answer_span": "A storage location for your source code, logs, and other artifacts.", "chunk": "deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then"} -{"global_id": 104, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 1, "question": "What is the format of the domain name that routes to your web app?", "answer_span": "A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. 
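Waiting for the environment's health to reach Ok and then browsing to its domain can also be scripted. A minimal boto3 polling sketch, assuming the gs-app-web-env environment from the tutorial:

```python
import time
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Poll until the environment finishes launching, then print its public URL.
while True:
    env = eb.describe_environments(EnvironmentNames=["gs-app-web-env"])["Environments"][0]
    print(env["Status"], env["Health"])
    if env["Status"] == "Ready":
        break
    time.sleep(15)

print("Browse to: http://" + env.get("CNAME", "<pending>"))
```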
Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you"} -{"global_id": 105, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 2, "question": "What happens after all of the resources are deployed?", "answer_span": "the environment's health should change to Ok.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. 
After the overview pane, you"} -{"global_id": 106, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 3, "question": "How can you start exploring your deployed application environment?", "answer_span": "You'll start exploring your deployed application environment from the Environment overview page in the console.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you"} -{"global_id": 107, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 4, "question": "What information is shown in the Environment overview of the Elastic Beanstalk console?", "answer_span": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on.", "chunk": "in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide After all of the resources are deployed, the environment's health should change to Ok. Step 2 - Deploy your application 12 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. 
Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you"} -{"global_id": 108, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 1, "question": "What information is included in the Environment overview?", "answer_span": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on.", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. 
• Monitoring –"} -{"global_id": 109, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 2, "question": "What is the status of the environment while Elastic Beanstalk is launching the application?", "answer_span": "the environment is in a Pending state.", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring –"} -{"global_id": 110, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 3, "question": "What can you view and edit in the Configuration link?", "answer_span": "You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more!", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. 
Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring –"} -{"global_id": 111, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 4, "question": "How long are the retrieved logs available for?", "answer_span": "The retrieved logs are available for 15 minutes.", "chunk": "the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide • Events – View an updating list of information and error messages from the Elastic Beanstalk service and other services for resources in your environment. • Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. 
• Monitoring –"} -{"global_id": 112, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 1, "question": "What can you view regarding the health of Amazon EC2 instances?", "answer_span": "View status and detailed health information for the Amazon EC2 instances running your application.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. • Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in realtime. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. Choose the Logs tab or left-nav, then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the log and note the directory for the nginx access log. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} -{"global_id": 113, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 2, "question": "How long are the retrieved logs available for?", "answer_span": "The retrieved logs are available for 15 minutes.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. • Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in realtime. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. 
Choose the Logs tab or left-nav, then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the log and note the directory for the nginx access log. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} -{"global_id": 114, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 3, "question": "What is the first step to request logs in the Elastic Beanstalk console?", "answer_span": "Navigate to your environment in the Elastic Beanstalk console.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. • Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in realtime. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. Choose the Logs tab or left-nav, then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the log and note the directory for the nginx access log. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} -{"global_id": 115, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 4, "question": "What must you do before connecting to Amazon EC2 with Session Manager?", "answer_span": "Add a policy that enables connections to Amazon EC2 with Session Manager.", "chunk": "• Health – View status and detailed health information for the Amazon EC2 instances running your application. • Logs – Retrieve and download logs from the Amazon EC2 in your environment. You can retrieve full logs or recent activity. The retrieved logs are available for 15 minutes. • Monitoring – View statistics for the environment, such as average latency and CPU utilization. • Alarms – View and edit alarms that are configured for environment metrics. 
• Managed updates – View information about upcoming and completed managed platform updates and instance replacement. • Tags – View and edit key-value pairs that are applied to your environment. Note Links in the console navigation pane will display the corresponding tab. Troubleshooting with logs For troubleshooting unexpected behaviors or debugging deployments, you might want to check the logs in your environments. You can request 100 lines of all the log files under the Logs tab in the Elastic Beanstalk console. Alternatively, you can connect directly to the Amazon EC2 instance and tail the logs in realtime. To request the logs (Elastic Beanstalk console) 1. Navigate to your environment in the Elastic Beanstalk console. 2. Choose the Logs tab or left-nav, then choose Request logs. 3. Select Last 100 lines. 4. After the logs are created, choose the Download link to view the logs in the browser. In the logs, find the log and note the directory for the nginx access log. Troubleshooting with logs 16 AWS Elastic Beanstalk Developer Guide Add a policy to enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that"} -{"global_id": 116, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 1, "question": "What must you add to enable connections to Amazon EC2 with Session Manager?", "answer_span": "you must add a policy that enables connections to Amazon EC2 with Session Manager.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. 
Select Choose"} -{"global_id": 117, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 2, "question": "What is the first step to connect to your Amazon EC2 with Session Manager?", "answer_span": "Navigate to the Amazon EC2 console.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose"} -{"global_id": 118, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 3, "question": "What command should you run to start a bash shell after connecting to the instance?", "answer_span": "Run the command bash.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. 
Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose"} -{"global_id": 119, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 4, "question": "What should you do if the connect button is not available?", "answer_span": "go back to IAM and verify that you added the necessary policy to the role.", "chunk": "enable connections to Amazon EC2 Before you can connect, you must add a policy that enables connections to Amazon EC2 with Session Manager. 1. Navigate to the IAM console. 2. Find and select the aws-elasticbeanstalk-ec2-role role. 3. Choose Add permission, then Attach policies. 4. Search for a default policy that begins with the following text: AmazonSSMManagedEC2Instance, then add it to the role. To connect to your Amazon EC2 with Session Manager 1. Navigate to the Amazon EC2 console. 2. Choose Instances, then select your gs-app-web-env instance. 3. Choose Connect, then Session Manager. 4. Choose Connect. After connecting to the instance, start a bash shell and tail the logs: 1. Run the command bash. 2. Run the command cd /var/log/nginx. 3. Run the command tail -f access.log. 4. In your browser, go to the application domain URL. Refresh. Congratulations, you're connected! You should see log entries in your instance update every time you refresh the page. Connect button not working? If the connect button is not available, go back to IAM and verify that you added the necessary policy to the role. Troubleshooting with logs 17 AWS Elastic Beanstalk Developer Guide Step 4 - Update your application Eventually, you will want to update your application. You can deploy a new version at any time, as long as no other update operations are in progress on your environment. The application version that you started this tutorial with is called Sample Application. To update your application version 1. Download the following PHP sample application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose"} -{"global_id": 120, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 1, "question": "What is the first step to deploy an application using Elastic Beanstalk?", "answer_span": "Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. 
The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} -{"global_id": 121, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 2, "question": "What happens to the environment Health status while the application version is updated?", "answer_span": "While the application version is updated, the environment Health status is gray.", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . 
Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} -{"global_id": 122, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 3, "question": "What should you do if you want to edit the source of the application?", "answer_span": "If you want to edit the source yourself, unzip, edit, then re-zip the source bundle.", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} -{"global_id": 123, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 4, "question": "What command should be used on macOS to re-zip the source bundle excluding extra file attributes?", "answer_span": "On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip .", "chunk": "application: PHP – php-v2.zip 2. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 3. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 4. On the environment overview page, choose Upload and deploy. 5. Select Choose file, and then upload the sample application source bundle that you downloaded. The console automatically fills in the Version label with a new unique label, automatically incrementing a trailing integer. If you choose your own version label, ensure that it's unique. Step 4 - Update your application 18 AWS Elastic Beanstalk 6. Developer Guide Choose Deploy. 
While Elastic Beanstalk deploys your file to your Amazon EC2 instances, you can view the deployment status on the Environment overview page. While the application version is updated, the environment Health status is gray. When the deployment is complete, Elastic Beanstalk performs an application health check. When the application responds to the health check, it's considered healthy and the status returns to green. The environment overview shows the new Running Version—the name you provided as the Version label. Elastic Beanstalk also uploads your new application version and adds it to the table of application versions. To view the table, choose Application versions under getting-started-app on the navigation pane. Update success! You should see an updated \"v2\" message after refreshing your browser. If you want to edit the source yourself, unzip, edit, then re-zip the source bundle. On macOS, use the following command from inside your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud"} -{"global_id": 124, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 1, "question": "What command is used to zip the PHP directory while excluding extra file attributes?", "answer_span": "zip -X -r ../php-v2.zip .", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. 
Increase capacity settings 20 AWS Elastic Beanstalk 7."} -{"global_id": 125, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 2, "question": "What does Elastic Beanstalk do to apply configuration changes?", "answer_span": "Elastic Beanstalk performs an environment update.", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7."} -{"global_id": 126, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 3, "question": "How many Amazon EC2 instances can be configured in the Auto Scaling group?", "answer_span": "between two and four Amazon EC2 instances in its Auto Scaling group", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. 
You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7."} -{"global_id": 127, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 4, "question": "What should you change the Environment type to in the Auto Scaling group?", "answer_span": "change Environment type to Load balanced.", "chunk": "your php directory with the -X to exclude extra file attributes: zip -X -r ../php-v2.zip . Step 5 - Scale your application You can configure your environment to better suit your application. For example, if you have a compute-intensive application, you can change the type of Amazon Elastic Compute Cloud (Amazon EC2) instance that is running your application. To apply configuration changes, Elastic Beanstalk performs an environment update. Some configuration changes are simple and happen quickly. Some changes require deleting and recreating AWS resources, which can take several minutes. When you change configuration settings, Elastic Beanstalk warns you about potential application downtime. Step 5 - Scale your application 19 AWS Elastic Beanstalk Developer Guide Increase capacity settings In this example of a configuration change, you edit your environment's capacity settings. You configure a load-balanced, scalable environment that has between two and four Amazon EC2 instances in its Auto Scaling group, and then you verify that the change occurred. Elastic Beanstalk creates an additional Amazon EC2 instance, adding to the single instance that it created initially. Then, Elastic Beanstalk associates both instances with the environment's load balancer. As a result, your application's responsiveness is improved and its availability is increased. To change your environment's capacity 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. In the navigation pane, choose Configuration. 4. In the Instance traffic and scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. 
Increase capacity settings 20 AWS Elastic Beanstalk 7."} -{"global_id": 128, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 1, "question": "What should you do to change the environment type to Load balanced?", "answer_span": "Under Auto Scaling group change Environment type to Load balanced.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes choose Apply at the bottom of the page. If you are warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launched a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. Verify increased capacity 21"} -{"global_id": 129, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 2, "question": "What is the new minimum and maximum capacity setting after the changes?", "answer_span": "In the Instances row, change Min to 2 and Max to 4.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes choose Apply at the bottom of the page. If you are warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launched a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. 
Verify increased capacity 21"} -{"global_id": 130, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 3, "question": "What should you do if warned that the update will replace all current instances?", "answer_span": "Choose Confirm.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes choose Apply at the bottom of the page. If you are warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launched a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. Verify increased capacity 21"} -{"global_id": 131, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 4, "question": "How can you verify that the capacity has increased after the environment update?", "answer_span": "To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane.", "chunk": "scaling configuration category, choose Edit. 5. Collapse the Instances section, so you can more easily see the Capacity section. Under Auto Scaling group change Environment type to Load balanced. 6. In the Instances row, change Min to 2 and Max to 4. Increase capacity settings 20 AWS Elastic Beanstalk 7. Developer Guide To save the changes choose Apply at the bottom of the page. If you are warned that the update will replace all of your current instances. Choose Confirm. The environment update can take a few minutes. You should see several updates in the list of events. Watch for the event Successfully deployed new configuration to environment. Verify increased capacity After the environment update is complete and the environment is ready, Elastic Beanstalk automatically launched a second instance to meet your new minimum capacity setting. To verify the increased capacity 1. Choose Health from either the tab list or left navigation pane. 2. Review the Enhanced instance health section. You just scaled up! With two Amazon EC2 instances, your environment capacity has doubled, and it only took a few minutes. Cleaning up your Elastic Beanstalk environment To ensure that you're not charged for any services you aren't using, delete all application versions and terminate environments, which also deletes the AWS resources that the environment created for you. 
Verify increased capacity 21"} -{"global_id": 132, "doc_id": "fargate", "chunk_id": "0", "question_id": 1, "question": "What technology does AWS Fargate use with Amazon ECS?", "answer_span": "AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} -{"global_id": 133, "doc_id": "fargate", "chunk_id": "0", "question_id": 2, "question": "What do you no longer have to do when using AWS Fargate?", "answer_span": "With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. 
You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} -{"global_id": 134, "doc_id": "fargate", "chunk_id": "0", "question_id": 3, "question": "What must you set to configure your task definitions for Fargate?", "answer_span": "You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. 
Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} -{"global_id": 135, "doc_id": "fargate", "chunk_id": "0", "question_id": 4, "question": "What operating systems does Fargate offer platform versions for?", "answer_span": "Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.", "chunk": "Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about"} -{"global_id": 136, "doc_id": "fargate", "chunk_id": "1", "question_id": 1, "question": "What are the two types of Amazon ECS tasks mentioned for the Fargate launch type?", "answer_span": "• Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type", "chunk": "“Windows containers on AWS Fargate”. 
Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} -{"global_id": 137, "doc_id": "fargate", "chunk_id": "1", "question_id": 2, "question": "What is Fargate Spot?", "answer_span": "Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price.", "chunk": "“Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. 
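To illustrate the FARGATE and FARGATE_SPOT capacity providers listed above, the following sketch associates both with an existing cluster and weights new tasks toward Spot; the cluster name and strategy values are assumptions, not recommendations from this guide.

# Attach both Fargate capacity providers and prefer Spot 4:1 after one baseline On-Demand task
aws ecs put-cluster-capacity-providers \
  --cluster clustername \
  --capacity-providers FARGATE FARGATE_SPOT \
  --default-capacity-provider-strategy \
      capacityProvider=FARGATE,weight=1,base=1 \
      capacityProvider=FARGATE_SPOT,weight=4 \
  --region region

Tasks placed through the FARGATE_SPOT provider should tolerate the two-minute interruption warning described above.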
You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} -{"global_id": 138, "doc_id": "fargate", "chunk_id": "1", "question_id": 3, "question": "What do tasks that use the Fargate launch type not support?", "answer_span": "Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available.", "chunk": "“Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} -{"global_id": 139, "doc_id": "fargate", "chunk_id": "1", "question_id": 4, "question": "What is a Fargate platform version?", "answer_span": "AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure.", "chunk": "“Windows containers on AWS Fargate”. 
Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each"} -{"global_id": 140, "doc_id": "fargate", "chunk_id": "2", "question_id": 1, "question": "What happens when a security issue is found that affects an existing platform version?", "answer_span": "If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision.", "chunk": "to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. 
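As a sketch of the load balancing integration described above, the following command creates a Fargate service behind an existing Application Load Balancer target group; the target group ARN, container name, and port are hypothetical placeholders.

# Create a Fargate service and register its tasks with an ip-type target group
aws ecs create-service \
  --cluster clustername \
  --service-name webservice \
  --task-definition taskdefinition:version \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid],assignPublicIp=ENABLED}" \
  --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:region:123456789012:targetgroup/webtg/0123456789abcdef,containerName=web,containerPort=80 \
  --region region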
Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network"} -{"global_id": 141, "doc_id": "fargate", "chunk_id": "2", "question_id": 2, "question": "How can you ensure that tasks are always started on secure and patched infrastructure?", "answer_span": "A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure.", "chunk": "to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network"} -{"global_id": 142, "doc_id": "fargate", "chunk_id": "2", "question_id": 3, "question": "What types of load balancers are supported by Amazon ECS services on AWS Fargate?", "answer_span": "Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types.", "chunk": "to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. 
A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network"} -{"global_id": 143, "doc_id": "fargate", "chunk_id": "2", "question_id": 4, "question": "What must you choose as the target type when creating a target group for these services?", "answer_span": "When you create a target group for these services, you must choose ip as the target type, not instance.", "chunk": "to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. 
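Because a task only picks up the latest platform version revision when it is replaced, one hedged way to roll a service's tasks onto the newest revision is to force a new deployment; the cluster and service names below are placeholders.

# Replace the service's tasks; replacements start on the latest revision of the service's platform version
aws ecs update-service \
  --cluster clustername \
  --service webservice \
  --force-new-deployment \
  --region region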
This is because tasks that use the awsvpc network"} -{"global_id": 144, "doc_id": "fargate", "chunk_id": "3", "question_id": 1, "question": "What types of traffic can be routed using the described method?", "answer_span": "to route TCP or UDP (or layer 4) traffic.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} -{"global_id": 145, "doc_id": "fargate", "chunk_id": "3", "question_id": 2, "question": "What must you choose as the target type when creating a target group for certain services?", "answer_span": "you must choose ip as the target type, not instance.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. 
For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} -{"global_id": 146, "doc_id": "fargate", "chunk_id": "3", "question_id": 3, "question": "When is using a Network Load Balancer to route UDP traffic supported?", "answer_span": "Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} -{"global_id": 147, "doc_id": "fargate", "chunk_id": "3", "question_id": 4, "question": "What does AWS Fargate usage metrics correspond to?", "answer_span": "AWS Fargate usage metrics correspond to AWS service quotas.", "chunk": "to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. 
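Before configuring alarms, it can help to list the Fargate usage metrics and quotas that exist in the account; a sketch follows, and the exact metric dimensions should be taken from the list-metrics output rather than assumed.

# Discover Fargate usage metrics published to the AWS/Usage namespace
aws cloudwatch list-metrics \
  --namespace "AWS/Usage" \
  --dimensions Name=Service,Value=Fargate \
  --region region

# List the Fargate service quotas that those metrics correspond to
aws service-quotas list-service-quotas \
  --service-code fargate \
  --region region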
This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt"} -{"global_id": 148, "doc_id": "fargate", "chunk_id": "4", "question_id": 1, "question": "What is recommended for encrypting ephemeral storage for Fargate?", "answer_span": "You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys.", "chunk": "of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. 
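A sketch of creating a target group with the ip target type required for tasks that use the awsvpc network mode; the target group name, VPC ID, and health check path are hypothetical placeholders.

# Target type must be ip, because Fargate tasks register by elastic network interface IP address
aws elbv2 create-target-group \
  --name fargate-web-tg \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip \
  --health-check-path / \
  --region region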
aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} -{"global_id": 149, "doc_id": "fargate", "chunk_id": "4", "question_id": 2, "question": "What is the maximum amount of ephemeral storage that can be specified in a task definition?", "answer_span": "You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition.", "chunk": "of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} -{"global_id": 150, "doc_id": "fargate", "chunk_id": "4", "question_id": 3, "question": "What encryption algorithm is used for ephemeral storage launched on Fargate after May 28, 2020?", "answer_span": "the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate.", "chunk": "of AWS Fargate. 
Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} -{"global_id": 151, "doc_id": "fargate", "chunk_id": "4", "question_id": 4, "question": "Which kernel capability is supported for tasks launched on Fargate?", "answer_span": "Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability.", "chunk": "of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. 
Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your"} -{"global_id": 152, "doc_id": "fargate", "chunk_id": "5", "question_id": 1, "question": "What kernel capability does Fargate support for tasks?", "answer_span": "Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. 
While the application container runs core application code,"} -{"global_id": 153, "doc_id": "fargate", "chunk_id": "5", "question_id": 2, "question": "What service does Amazon GuardDuty provide?", "answer_span": "Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code,"} -{"global_id": 154, "doc_id": "fargate", "chunk_id": "5", "question_id": 3, "question": "What does Runtime Monitoring in GuardDuty do?", "answer_span": "Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. 
Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code,"} -{"global_id": 155, "doc_id": "fargate", "chunk_id": "5", "question_id": 4, "question": "How does Fargate ensure isolation for workloads?", "answer_span": "Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment.", "chunk": "removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. 
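A hedged sketch of enabling Runtime Monitoring with the managed security agent for Fargate on an existing GuardDuty detector; the detector ID is a placeholder, and the feature and configuration names should be verified against the current GuardDuty API reference.

# Find the detector for the Region, then enable Runtime Monitoring with managed Fargate agent deployment
aws guardduty list-detectors --region region

aws guardduty update-detector \
  --detector-id detectorid \
  --features 'Name=RUNTIME_MONITORING,Status=ENABLED,AdditionalConfiguration=[{Name=ECS_FARGATE_AGENT_MANAGEMENT,Status=ENABLED}]' \
  --region region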
Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code,"} -{"global_id": 156, "doc_id": "fargate", "chunk_id": "6", "question_id": 1, "question": "What is a sidecar in the context of Amazon ECS tasks?", "answer_span": "A sidecar is a container that runs alongside an application container in an Amazon ECS task.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} -{"global_id": 157, "doc_id": "fargate", "chunk_id": "6", "question_id": 2, "question": "How do sidecars benefit application functions?", "answer_span": "Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. 
Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} -{"global_id": 158, "doc_id": "fargate", "chunk_id": "6", "question_id": 3, "question": "What resources do containers in the same task share when using the Fargate launch type?", "answer_span": "These containers will always run on the same host and share compute resources.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. 
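To illustrate containers in one task communicating over localhost, here is a hedged fragment of a task definition's containerDefinitions in which a hypothetical sidecar scrapes the application container; the names, images, and port are placeholders.

"containerDefinitions": [
  {
    "name": "app",
    "image": "public.ecr.aws/nginx/nginx:latest",
    "essential": true,
    "portMappings": [{"containerPort": 80, "protocol": "tcp"}]
  },
  {
    "name": "metrics-sidecar",
    "image": "123456789012.dkr.ecr.region.amazonaws.com/metrics-agent:latest",
    "essential": false,
    "environment": [
      {"name": "SCRAPE_URL", "value": "http://localhost:80/status"}
    ]
  }
]

Because both containers share the task's network namespace on Fargate, the sidecar reaches the application at localhost without any extra networking configuration.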
Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} -{"global_id": 159, "doc_id": "fargate", "chunk_id": "6", "question_id": 4, "question": "What is a limitation of the Fargate runtime environment regarding Linux capabilities?", "answer_span": "The environment in which containers run on Fargate is locked down.", "chunk": "interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS"} -{"global_id": 160, "doc_id": "fargate", "chunk_id": "7", "question_id": 1, "question": "What capability does Fargate support adding to tasks for observability and security tools?", "answer_span": "Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. 
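A sketch of using ECS Exec to open an interactive shell in a running Fargate container for debugging; it assumes the task was launched with --enable-execute-command and a task role that allows the required SSM messaging actions, and the cluster, task ID, and container name are placeholders.

# Open a shell inside the named container of a running Fargate task
aws ecs execute-command \
  --cluster clustername \
  --task taskid \
  --container web \
  --interactive \
  --command "/bin/sh" \
  --region region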
Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} -{"global_id": 161, "doc_id": "fargate", "chunk_id": "7", "question_id": 2, "question": "Can customers connect to the underlying host running their workloads on Fargate?", "answer_span": "Neither customers nor AWS operators can connect to a host running customer workloads.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. 
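Relating back to the CAP_SYS_PTRACE support described above, the following hedged fragment shows how a container definition can request that capability through linuxParameters, for example so a tracing or security agent in the task can observe the application.

"linuxParameters": {
  "capabilities": {
    "add": ["SYS_PTRACE"]
  }
}

This block sits inside an individual container definition; other added capabilities are not supported on Fargate.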
If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} -{"global_id": 162, "doc_id": "fargate", "chunk_id": "7", "question_id": 3, "question": "What do Fargate tasks receive from the configured subnet in your VPC?", "answer_span": "Fargate tasks receive an IP address from the configured subnet in your VPC.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} -{"global_id": 163, "doc_id": "fargate", "chunk_id": "7", "question_id": 4, "question": "What happens if a security issue is found that affects an existing platform version?", "answer_span": "If a security issue is found that affects an existing platform version, AWS creates a new patched revision.", "chunk": "escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. 
It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision"} -{"global_id": 164, "doc_id": "fargate", "chunk_id": "8", "question_id": 1, "question": "What ensures that tasks on Fargate are started on secure infrastructure?", "answer_span": "A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure.", "chunk": "start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Consider the following when specifying a platform version: • You can specify a a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services. • If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} -{"global_id": 165, "doc_id": "fargate", "chunk_id": "8", "question_id": 2, "question": "What happens if a security issue is found in an existing platform version?", "answer_span": "If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision.", "chunk": "start a new task. 
A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Consider the following when specifying a platform version: • You can specify a a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services. • If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} -{"global_id": 166, "doc_id": "fargate", "chunk_id": "8", "question_id": 3, "question": "How can you specify the platform version when running a task?", "answer_span": "You specify the platform version when you run a task, or deploy a service.", "chunk": "start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Consider the following when specifying a platform version: • You can specify a a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services. 
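A minimal sketch of the deployment-based update just described, assuming a hypothetical service name: changing `platformVersion` on the service creates a new deployment, and the replacement tasks start on the latest revision of the new platform version.

```python
import boto3

ecs = boto3.client("ecs")

# Move an existing service from Linux platform version 1.3.0 to 1.4.0.
# The update triggers a new deployment that redeploys the service's tasks.
ecs.update_service(
    cluster="demo-cluster",    # placeholder
    service="demo-service",    # placeholder
    platformVersion="1.4.0",
    forceNewDeployment=True,
)
```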
• If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} -{"global_id": 167, "doc_id": "fargate", "chunk_id": "8", "question_id": 4, "question": "What should you do to update the platform version for a service?", "answer_span": "If you want to update the platform version for a service, create a deployment.", "chunk": "start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Consider the following when specifying a platform version: • You can specify a a specific version number, for example 1.4.0, or LATEST. The LATEST Linux platform version is 1.4.0. The LATEST Windows platform version is 1.0.0. • If you want to update the platform version for a service, create a deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. To change the service to run tasks on the Linux platform version 1.4.0, you update your service and specify a new platform version. Your tasks are redeployed with the latest platform version and the latest platform version revision. For more information about deployments, see Amazon ECS services. • If your service is scaled up without updating the platform version, those tasks receive the platform version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform"} -{"global_id": 168, "doc_id": "fargate", "chunk_id": "9", "question_id": 1, "question": "What happens when you increase the desired count of a service on the Linux platform version 1.3.0?", "answer_span": "If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent. 
For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions. Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Fargate platform versions 173 Amazon Elastic Container Service Developer Guide Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} -{"global_id": 169, "doc_id": "fargate", "chunk_id": "9", "question_id": 2, "question": "How do new tasks operate in relation to platform versions?", "answer_span": "New tasks always run on the latest revision of a platform version.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent. For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions. Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Fargate platform versions 173 Amazon Elastic Container Service Developer Guide Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. 
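To illustrate the Windows Server version matching described above, the sketch below registers a Fargate Windows task definition whose `runtimePlatform` is chosen to match the Windows Server 2019 base image of the container; the family name is a placeholder and the sizes are simply small Fargate Windows values.

```python
import boto3

ecs = boto3.client("ecs")

# operatingSystemFamily must correspond to the Windows Server release the
# container image was built from, so the task never lands on a mismatched
# Windows kernel.
ecs.register_task_definition(
    family="demo-windows-app",        # placeholder
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="1024",
    memory="2048",
    runtimePlatform={
        "operatingSystemFamily": "WINDOWS_SERVER_2019_CORE",
        "cpuArchitecture": "X86_64",
    },
    containerDefinitions=[
        {
            "name": "iis",
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "essential": True,
        }
    ],
)
```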
Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} -{"global_id": 170, "doc_id": "fargate", "chunk_id": "9", "question_id": 3, "question": "What must be selected when running a task or creating a service for Windows containers?", "answer_span": "You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent. For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions. Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Fargate platform versions 173 Amazon Elastic Container Service Developer Guide Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} -{"global_id": 171, "doc_id": "fargate", "chunk_id": "9", "question_id": 4, "question": "What is recommended before migrating tasks to platform version 1.4.0?", "answer_span": "It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks.", "chunk": "version that was specified on the service's current deployment. For example, assume that you have a service that runs tasks on the Linux platform version 1.3.0. If you increase the desired count of the service, the service scheduler starts the new tasks using the latest platform version revision of platform version 1.3.0. • New tasks always run on the latest revision of a platform version. This ensures tasks are always on secured and patched infrastructure. • The platform version numbers for Linux containers and Windows containers on Fargate are independent. For example, the behavior, features, and software used in platform version 1.0.0 for Windows containers on Fargate aren't comparable to those of platform version 1.0.0 for Linux containers on Fargate. • The following applies to Fargate Windows platform versions. 
Microsoft Windows Server container images must be created from a specific version of Windows Server. You must select the same version of Windows Server in the platformFamily when you run a task or create a service that matches the Windows Server container image. Additionally, you can provide a matching operatingSystemFamily in the task definition to prevent tasks from being run on the wrong Windows version. For more information, see Matching container host version with container image versions on the Microsoft Learn website. Fargate platform versions 173 Amazon Elastic Container Service Developer Guide Migrating to Linux platform version 1.4.0 on Amazon ECS Consider the following when migrating your Amazon ECS on Fargate tasks from platform version 1.0.0, 1.1.0, 1.2.0, or 1.3.0 to platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network"} -{"global_id": 172, "doc_id": "fargate", "chunk_id": "10", "question_id": 1, "question": "What is the best practice before migrating tasks?", "answer_span": "It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks.", "chunk": "platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. The traffic is visible to you through your VPC flow logs. For more information see Amazon ECS task networking options for the Fargate launch type. • If you use interface VPC endpoints, consider the following. • For container images hosted with Amazon ECR, you need the following endpoints. For more information, see Amazon ECR interface VPC endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide. • com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint • com.amazonaws.region.ecr.api Amazon ECR VPC endpoint • Amazon S3 gateway endpoint • When your task definition references Secrets Manager secrets to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Secrets Manager. For more information, see Using Secrets Manager with VPC Endpoints in the AWS Secrets Manager User Guide. • When your task definition references Systems Manager Parameter Store parameters to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Systems Manager. For more information, see Improve the security of EC2 instances by using VPC endpoints for Systems Manager in the AWS Systems Manager User Guide. • The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. 
For information about platform version deprecation, see AWS Fargate Linux"} -{"global_id": 173, "doc_id": "fargate", "chunk_id": "10", "question_id": 2, "question": "What has been updated regarding network traffic behavior in platform version 1.4.0?", "answer_span": "The network traffic behavior to and from tasks has been updated.", "chunk": "platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. The traffic is visible to you through your VPC flow logs. For more information see Amazon ECS task networking options for the Fargate launch type. • If you use interface VPC endpoints, consider the following. • For container images hosted with Amazon ECR, you need the following endpoints. For more information, see Amazon ECR interface VPC endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide. • com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint • com.amazonaws.region.ecr.api Amazon ECR VPC endpoint • Amazon S3 gateway endpoint • When your task definition references Secrets Manager secrets to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Secrets Manager. For more information, see Using Secrets Manager with VPC Endpoints in the AWS Secrets Manager User Guide. • When your task definition references Systems Manager Parameter Store parameters to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Systems Manager. For more information, see Improve the security of EC2 instances by using VPC endpoints for Systems Manager in the AWS Systems Manager User Guide. • The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux"} -{"global_id": 174, "doc_id": "fargate", "chunk_id": "10", "question_id": 3, "question": "What must you create when your task definition references Secrets Manager secrets?", "answer_span": "you must create the interface VPC endpoints for Secrets Manager.", "chunk": "platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. The traffic is visible to you through your VPC flow logs. For more information see Amazon ECS task networking options for the Fargate launch type. • If you use interface VPC endpoints, consider the following. • For container images hosted with Amazon ECR, you need the following endpoints. For more information, see Amazon ECR interface VPC endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide. 
• com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint • com.amazonaws.region.ecr.api Amazon ECR VPC endpoint • Amazon S3 gateway endpoint • When your task definition references Secrets Manager secrets to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Secrets Manager. For more information, see Using Secrets Manager with VPC Endpoints in the AWS Secrets Manager User Guide. • When your task definition references Systems Manager Parameter Store parameters to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Systems Manager. For more information, see Improve the security of EC2 instances by using VPC endpoints for Systems Manager in the AWS Systems Manager User Guide. • The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux"} -{"global_id": 175, "doc_id": "fargate", "chunk_id": "10", "question_id": 4, "question": "What is required for the security group associated with the Elastic Network Interface?", "answer_span": "The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints.", "chunk": "platform version 1.4.0. It is best practice to confirm your task works properly on platform version 1.4.0 before you migrate the tasks. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Amazon ECS on Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. The traffic is visible to you through your VPC flow logs. For more information see Amazon ECS task networking options for the Fargate launch type. • If you use interface VPC endpoints, consider the following. • For container images hosted with Amazon ECR, you need the following endpoints. For more information, see Amazon ECR interface VPC endpoints (AWS PrivateLink) in the Amazon Elastic Container Registry User Guide. • com.amazonaws.region.ecr.dkr Amazon ECR VPC endpoint • com.amazonaws.region.ecr.api Amazon ECR VPC endpoint • Amazon S3 gateway endpoint • When your task definition references Secrets Manager secrets to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Secrets Manager. For more information, see Using Secrets Manager with VPC Endpoints in the AWS Secrets Manager User Guide. • When your task definition references Systems Manager Parameter Store parameters to retrieve sensitive data for your containers, you must create the interface VPC endpoints for Systems Manager. For more information, see Improve the security of EC2 instances by using VPC endpoints for Systems Manager in the AWS Systems Manager User Guide. • The security group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. 
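The endpoints listed in this migration note can be created along these lines; a sketch with placeholder VPC, subnet, security group, and route table IDs, using `us-east-1` in place of `region`.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VPC_ID = "vpc-0123456789abcdef0"                 # placeholder
SUBNET_IDS = ["subnet-0123456789abcdef0"]        # placeholder
SG_IDS = ["sg-0123456789abcdef0"]                # placeholder
ROUTE_TABLE_IDS = ["rtb-0123456789abcdef0"]      # placeholder

# Interface endpoints so Fargate tasks can reach Amazon ECR privately.
for service in ("com.amazonaws.us-east-1.ecr.dkr",
                "com.amazonaws.us-east-1.ecr.api"):
    ec2.create_vpc_endpoint(
        VpcId=VPC_ID,
        ServiceName=service,
        VpcEndpointType="Interface",
        SubnetIds=SUBNET_IDS,
        SecurityGroupIds=SG_IDS,
        PrivateDnsEnabled=True,
    )

# Gateway endpoint for Amazon S3, used for image layers pulled from ECR.
ec2.create_vpc_endpoint(
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=ROUTE_TABLE_IDS,
)
```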
For information about platform version deprecation, see AWS Fargate Linux"} -{"global_id": 176, "doc_id": "fargate", "chunk_id": "11", "question_id": 1, "question": "What does the security group rules need to allow?", "answer_span": "the security group rules to allow traffic between the task and the VPC endpoints.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. Migrating to Linux platform version 1.4.0 174 Amazon Elastic Container Service Developer Guide 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} -{"global_id": 177, "doc_id": "fargate", "chunk_id": "11", "question_id": 2, "question": "What is the changelog for platform version 1.4.0 related to?", "answer_span": "The following is the changelog for platform version 1.4.0.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. Migrating to Linux platform version 1.4.0 174 Amazon Elastic Container Service Developer Guide 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. 
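A container definition sketch combining the two parameters just described for platform version 1.4.0: a Secrets Manager reference that pins a specific JSON key of a secret, and a bulk environment file stored in Amazon S3. The secret ARN, key name, and bucket are placeholders.

```python
# Fragment of a Fargate container definition (platform version 1.4.0 or later).
container_definition = {
    "name": "app",
    "image": "public.ecr.aws/docker/library/nginx:latest",
    "essential": True,
    # Inject only the "password" key from a JSON secret; the trailing "::"
    # leaves the version stage/ID at their defaults (AWSCURRENT).
    "secrets": [
        {
            "name": "DB_PASSWORD",
            "valueFrom": (
                "arn:aws:secretsmanager:us-east-1:111122223333:"
                "secret:prod/db-AbCdEf:password::"
            ),
        }
    ],
    # Load environment variables in bulk from an .env object in S3.
    "environmentFiles": [
        {"type": "s3", "value": "arn:aws:s3:::demo-config-bucket/app.env"}
    ],
}
```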
For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} -{"global_id": 178, "doc_id": "fargate", "chunk_id": "11", "question_id": 3, "question": "What can you do when using Secrets Manager with platform version 1.4.0?", "answer_span": "you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. Migrating to Linux platform version 1.4.0 174 Amazon Elastic Container Service Developer Guide 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} -{"global_id": 179, "doc_id": "fargate", "chunk_id": "11", "question_id": 4, "question": "What will tasks run in a VPC and subnet enabled for IPv6 be assigned?", "answer_span": "will be assigned both a private IPv4 address and an IPv6 address.", "chunk": "group for the Elastic Network Interface (ENI) associated with your task needs the security group rules to allow traffic between the task and the VPC endpoints. Fargate Linux platform version change log The following are the available Linux platform versions. 
For information about platform version deprecation, see AWS Fargate Linux platform version deprecation. Migrating to Linux platform version 1.4.0 174 Amazon Elastic Container Service Developer Guide 1.4.0 The following is the changelog for platform version 1.4.0. • Beginning on November 5, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to use the following features: • When using Secrets Manager to store sensitive data, you can inject a specific JSON key or a specific version of a secret as an environment variable or in a log configuration. For more information, see Pass sensitive data to an Amazon ECS container. • Specify environment variables in bulk using the environmentFiles container definition parameter. For more information, see Pass an individual environment variable to an Amazon ECS container. • Tasks run in a VPC and subnet enabled for IPv6 will be assigned both a private IPv4 address and an IPv6 address. For more information, see Amazon ECS task networking options for the Fargate launch type. • The task metadata endpoint version 4 provides additional metadata about your task and container including the task launch type, the Amazon Resource Name (ARN) of the container, and the log driver and log driver options used. When querying the /stats endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to"} -{"global_id": 180, "doc_id": "fargate", "chunk_id": "12", "question_id": 1, "question": "What new feature was introduced for Amazon ECS tasks launched on Fargate using platform version 1.4.0 starting July 30, 2020?", "answer_span": "any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks.", "chunk": "endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks. For more information, see Use load balancing to distribute Amazon ECS service traffic. • Beginning on May 28, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key. For more information, see Fargate task ephemeral storage for Amazon ECS and Storage options for Amazon ECS tasks. • Added support for using Amazon EFS file system volumes for persistent task storage. For more information, see Use Amazon EFS volumes with Amazon ECS. • The ephemeral task storage has been increased to a minimum of 20 GB for each task. For more information, see Fargate task ephemeral storage for Amazon ECS. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through Linux Platform version change log 175 Amazon Elastic Container Service Developer Guide your VPC flow logs. 
For more information about networking for the Amazon EC2 launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission"} -{"global_id": 181, "doc_id": "fargate", "chunk_id": "12", "question_id": 2, "question": "What encryption method is used for ephemeral storage in Amazon ECS tasks launched on Fargate starting May 28, 2020?", "answer_span": "any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key.", "chunk": "endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks. For more information, see Use load balancing to distribute Amazon ECS service traffic. • Beginning on May 28, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key. For more information, see Fargate task ephemeral storage for Amazon ECS and Storage options for Amazon ECS tasks. • Added support for using Amazon EFS file system volumes for persistent task storage. For more information, see Use Amazon EFS volumes with Amazon ECS. • The ephemeral task storage has been increased to a minimum of 20 GB for each task. For more information, see Fargate task ephemeral storage for Amazon ECS. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through Linux Platform version change log 175 Amazon Elastic Container Service Developer Guide your VPC flow logs. For more information about networking for the Amazon EC2 launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission"} -{"global_id": 182, "doc_id": "fargate", "chunk_id": "12", "question_id": 3, "question": "What is the minimum size of ephemeral task storage for Amazon ECS tasks?", "answer_span": "The ephemeral task storage has been increased to a minimum of 20 GB for each task.", "chunk": "endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks. For more information, see Use load balancing to distribute Amazon ECS service traffic. • Beginning on May 28, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key. 
For more information, see Fargate task ephemeral storage for Amazon ECS and Storage options for Amazon ECS tasks. • Added support for using Amazon EFS file system volumes for persistent task storage. For more information, see Use Amazon EFS volumes with Amazon ECS. • The ephemeral task storage has been increased to a minimum of 20 GB for each task. For more information, see Fargate task ephemeral storage for Amazon ECS. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through Linux Platform version change log 175 Amazon Elastic Container Service Developer Guide your VPC flow logs. For more information about networking for the Amazon EC2 launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission"} -{"global_id": 183, "doc_id": "fargate", "chunk_id": "12", "question_id": 4, "question": "What change was made to the network traffic behavior for Fargate tasks with platform version 1.4.0?", "answer_span": "Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through your VPC flow logs.", "chunk": "endpoint you also receive network rate stats for your containers. For more information, see Task metadata endpoint version 4. • Beginning on July 30, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will be able to route UDP traffic using a Network Load Balancer to their Amazon ECS on Fargate tasks. For more information, see Use load balancing to distribute Amazon ECS service traffic. • Beginning on May 28, 2020, any new Amazon ECS task launched on Fargate using platform version 1.4.0 will have its ephemeral storage encrypted with an AES-256 encryption algorithm using an AWS owned encryption key. For more information, see Fargate task ephemeral storage for Amazon ECS and Storage options for Amazon ECS tasks. • Added support for using Amazon EFS file system volumes for persistent task storage. For more information, see Use Amazon EFS volumes with Amazon ECS. • The ephemeral task storage has been increased to a minimum of 20 GB for each task. For more information, see Fargate task ephemeral storage for Amazon ECS. • The network traffic behavior to and from tasks has been updated. Starting with platform version 1.4.0, all Fargate tasks receive a single elastic network interface (referred to as the task ENI) and all network traffic flows through that ENI within your VPC and will be visible to you through Linux Platform version change log 175 Amazon Elastic Container Service Developer Guide your VPC flow logs. For more information about networking for the Amazon EC2 launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. 
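A sketch of the Amazon EFS support noted above: the task definition declares an EFS-backed volume and the container mounts it for persistent storage. The file system ID, family, and paths are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Fargate task definition (platform version 1.4.0 or later) that mounts an
# Amazon EFS file system at /data for storage that outlives the task.
ecs.register_task_definition(
    family="demo-efs-app",             # placeholder
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    volumes=[
        {
            "name": "shared-data",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",    # placeholder
                "rootDirectory": "/",
                "transitEncryption": "ENABLED",
            },
        }
    ],
    containerDefinitions=[
        {
            "name": "app",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "essential": True,
            "mountPoints": [
                {"sourceVolume": "shared-data", "containerPath": "/data"}
            ],
        }
    ],
)
```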
Network interfaces are configured with a maximum transmission"} -{"global_id": 184, "doc_id": "fargate", "chunk_id": "13", "question_id": 1, "question": "What does the Fargate launch type support for networking?", "answer_span": "For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} -{"global_id": 185, "doc_id": "fargate", "chunk_id": "13", "question_id": 2, "question": "What is the benefit of supporting jumbo frames?", "answer_span": "Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. 
• Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} -{"global_id": 186, "doc_id": "fargate", "chunk_id": "13", "question_id": 3, "question": "What does CloudWatch Container Insights provide for Fargate tasks?", "answer_span": "CloudWatch Container Insights will include network performance metrics for Fargate tasks.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} -{"global_id": 187, "doc_id": "fargate", "chunk_id": "13", "question_id": 4, "question": "What has replaced the Amazon ECS container agent for Fargate tasks?", "answer_span": "The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks.", "chunk": "launch type, see Amazon ECS task networking options for the EC2 launch type. 
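From inside a running container, the version 4 task metadata endpoint mentioned above is reachable through an environment variable that Fargate injects on platform version 1.4.0. A minimal stdlib-only sketch:

```python
import json
import os
import urllib.request

# Injected into every container on Fargate platform version 1.4.0 and later.
base = os.environ["ECS_CONTAINER_METADATA_URI_V4"]

# Task-level metadata: launch type, Availability Zone, container ARNs, etc.
with urllib.request.urlopen(f"{base}/task") as resp:
    task_md = json.load(resp)
print(task_md.get("LaunchType"), task_md.get("AvailabilityZone"))

# Per-container stats, including the network rate stats described above.
with urllib.request.urlopen(f"{base}/task/stats") as resp:
    stats = json.load(resp)
print(list(stats.keys()))
```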
For more information about networking for the Fargate launch type, see Amazon ECS task networking options for the Fargate launch type. • Task ENIs add support for jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames will reduce overhead when the network path between your task and the destination supports jumbo frames, such as all traffic that remains within your VPC. • CloudWatch Container Insights will include network performance metrics for Fargate tasks. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Added support for the task metadata endpoint version 4 which provides additional information for your Fargate tasks, including network stats for the task and which Availability Zone the task is running in. For more information, see Amazon ECS task metadata endpoint version 4 and Amazon ECS task metadata endpoint version 4 for tasks on Fargate. • Added support for the SYS_PTRACE Linux parameter in container definitions. For more information, see Linux parameters. • The Fargate container agent replaces the use of the Amazon ECS container agent for all Fargate tasks. Usually, this change does not have an effect on how your tasks run. • The container runtime is now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error"} -{"global_id": 188, "doc_id": "fargate", "chunk_id": "14", "question_id": 1, "question": "What change is mentioned regarding the container runtime?", "answer_span": "now using Containerd instead of Docker.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner. Linux Platform version change log 176 Amazon Elastic Container Service Developer Guide • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts. 
• Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that"} -{"global_id": 189, "doc_id": "fargate", "chunk_id": "14", "question_id": 2, "question": "What does the changelog for platform version 1.3.0 include?", "answer_span": "The following is the changelog for platform version 1.3.0.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner. Linux Platform version change log 176 Amazon Elastic Container Service Developer Guide • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts. • Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that"} -{"global_id": 190, "doc_id": "fargate", "chunk_id": "14", "question_id": 3, "question": "What feature supports the awsfirelens log driver?", "answer_span": "any new Fargate task that is launched supports the awsfirelens log driver.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner. 
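Referencing Secrets Manager secrets or Systems Manager Parameter Store parameters from a container definition, as described above, uses the secrets field with a valueFrom ARN. The sketch below uses placeholder ARNs; the task execution role must be permitted to read the referenced secrets and parameters.

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="example-with-secrets",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # The execution role needs secretsmanager:GetSecretValue and/or
    # ssm:GetParameters for the referenced ARNs (placeholders here).
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "app",
            "image": "public.ecr.aws/docker/library/nginx:latest",
            "essential": True,
            "secrets": [
                {   # Injected as the DB_PASSWORD environment variable.
                    "name": "DB_PASSWORD",
                    "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-password-AbCdEf",
                },
                {   # Parameter Store parameters are referenced the same way.
                    "name": "API_ENDPOINT",
                    "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/example/api-endpoint",
                },
            ],
        }
    ],
)
```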
Linux Platform version change log 176 Amazon Elastic Container Service Developer Guide • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts. • Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that"} -{"global_id": 191, "doc_id": "fargate", "chunk_id": "14", "question_id": 4, "question": "What can new Fargate tasks launched after March 27, 2019, use?", "answer_span": "any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value.", "chunk": "now using Containerd instead of Docker. Most likely, this change does not have an effect on how your tasks run. You will notice that some error messages that originate with the container runtime changes from mentioning Docker to more general errors. For more information, see Amazon ECS stopped task error messages. • Based on Amazon Linux 2. 1.3.0 The following is the changelog for platform version 1.3.0. • Beginning on Sept 30, 2019, any new Fargate task that is launched supports the awsfirelens log driver. Configure the FireLens for Amazon ECS to use task definition parameters to route logs to an AWS service or AWS Partner Network (APN) destination for log storage and analytics. For more information, see Send Amazon ECS logs to an AWS service or AWS Partner. Linux Platform version change log 176 Amazon Elastic Container Service Developer Guide • Added task recycling for Fargate tasks, which is the process of refreshing tasks that are a part of an Amazon ECS service. For more information, Task retirement and maintenance for AWS Fargate on Amazon ECS. • Beginning on March 27, 2019, any new Fargate task that is launched can use additional task definition parameters that you use to define a proxy configuration, dependencies for container startup and shutdown as well as a per-container start and stop timeout value. For more information, see Proxy configuration, Container dependency, and Container timeouts. • Beginning on April 2, 2019, any new Fargate task that is launched supports injecting sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. 
• Beginning on May 1, 2019, any new Fargate task that"} -{"global_id": 192, "doc_id": "fargate", "chunk_id": "15", "question_id": 1, "question": "What can sensitive data be stored in for containers?", "answer_span": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. • Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform"} -{"global_id": 193, "doc_id": "fargate", "chunk_id": "15", "question_id": 2, "question": "When did support for referencing sensitive data in the log configuration of a container begin?", "answer_span": "Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. 
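The splunk log driver and the secretOptions field called out above combine naturally: the Splunk HEC token can be resolved from Secrets Manager at task launch instead of being stored in the task definition. A sketch of the relevant logConfiguration fragment follows; the endpoint URL and secret ARN are placeholders.

```python
# Fragment of a container definition; see the earlier sketches for the full
# register_task_definition call. Endpoint and ARN values are placeholders.
log_configuration = {
    "logDriver": "splunk",
    "options": {
        "splunk-url": "https://splunk.example.com:8088",
        "splunk-source": "ecs",
    },
    "secretOptions": [
        {
            # The HEC token is pulled from Secrets Manager at task launch.
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:splunk-hec-token-AbCdEf",
        }
    ],
}
```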
• Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform"} -{"global_id": 194, "doc_id": "fargate", "chunk_id": "15", "question_id": 3, "question": "What log driver was added to Fargate tasks on May 1, 2019?", "answer_span": "Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. • Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. 
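Because services that track LATEST are force-updated when a platform version is deprecated, one option is to pin an explicit platformVersion; standalone tasks with an explicit version are unaffected, as noted above. A hedged run_task sketch follows, with placeholder cluster, task definition, subnet, and security group identifiers.

```python
import boto3

ecs = boto3.client("ecs")

ecs.run_task(
    cluster="example-cluster",                # placeholder
    taskDefinition="example-with-secrets:1",  # placeholder family:revision
    launchType="FARGATE",
    # Pinning an explicit platform version opts this task out of the LATEST
    # behaviour described above; omit the parameter to track LATEST instead.
    platformVersion="1.4.0",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
```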
Standalone tasks or services with an explicit platform"} -{"global_id": 195, "doc_id": "fargate", "chunk_id": "15", "question_id": 4, "question": "What is the significance of December 3, 2019, for Fargate?", "answer_span": "Beginning on December 3, 2019, the Fargate Spot capacity provider is supported.", "chunk": "containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports referencing sensitive data in the log configuration of a container using the secretOptions container definition parameter. For more information, see Pass sensitive data to an Amazon ECS container. • Beginning on May 1, 2019, any new Fargate task that is launched supports the splunk log driver in addition to the awslogs log driver. For more information, see Storage and logging. • Beginning on July 9, 2019, any new Fargate tasks that is launched supports CloudWatch Container Insights. For more information, see Monitor Amazon ECS containers using Container Insights with enhanced observability. • Beginning on December 3, 2019, the Fargate Spot capacity provider is supported. For more information, see Amazon ECS clusters for Fargate. • Based on Amazon Linux 2. AWS Fargate Linux platform version deprecation This page lists Linux platform versions that AWS Fargate has deprecated or have been scheduled for deprecation. These platform versions remain available until the published deprecation date. A force update date is provided for each platform version scheduled for deprecation. On the force update date, any service using the LATEST platform version that is pointed to a platform version that is scheduled for deprecation will be updated using the force new deployment option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform"} -{"global_id": 196, "doc_id": "fargate", "chunk_id": "16", "question_id": 1, "question": "What happens to tasks running on a platform version scheduled for deprecation when the service is updated using the force new deployment option?", "answer_span": "all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time.", "chunk": "option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} -{"global_id": 197, "doc_id": "fargate", "chunk_id": "16", "question_id": 2, "question": "Are standalone tasks or services with an explicit platform version affected by the force update date?", "answer_span": "Standalone tasks or services with an explicit platform version set are not affected by the force update date.", "chunk": "option. 
When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} -{"global_id": 198, "doc_id": "fargate", "chunk_id": "16", "question_id": 3, "question": "What does the force new deployment option do?", "answer_span": "When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time.", "chunk": "option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} -{"global_id": 199, "doc_id": "fargate", "chunk_id": "16", "question_id": 4, "question": "What is the context of the text regarding Linux platform version?", "answer_span": "Linux platform version deprecation 177", "chunk": "option. When the service is updated using the force new deployment option, all tasks running on a platform version scheduled for deprecation are stopped and new tasks are launched using the platform version that the LATEST tag points to at that time. Standalone tasks or services with an explicit platform version set are not affected by the force update date. Linux platform version deprecation 177"} -{"global_id": 200, "doc_id": "ecs", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Elastic Container Service?", "answer_span": "Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. 
The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} -{"global_id": 201, "doc_id": "ecs", "chunk_id": "0", "question_id": 2, "question": "What are the three layers in Amazon ECS?", "answer_span": "There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} -{"global_id": 202, "doc_id": "ecs", "chunk_id": "0", "question_id": 3, "question": "What is AWS Fargate?", "answer_span": "Fargate is a serverless, pay-as-you-go compute engine.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? 
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} -{"global_id": 203, "doc_id": "ecs", "chunk_id": "0", "question_id": 4, "question": "What does Amazon ECS Anywhere provide support for?", "answer_span": "Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. 
The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} -{"global_id": 204, "doc_id": "ecs", "chunk_id": "1", "question_id": 1, "question": "What does Amazon ECS Anywhere provide support for?", "answer_span": "Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} -{"global_id": 205, "doc_id": "ecs", "chunk_id": "1", "question_id": 2, "question": "What is the role of the Amazon ECS scheduler?", "answer_span": "The Amazon ECS scheduler is the software that manages your applications.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. 
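The capacity options listed above are attached to a cluster as capacity providers. The following is a minimal boto3 sketch that creates a cluster with the managed FARGATE and FARGATE_SPOT providers and a default strategy; the cluster name and the weights are illustrative choices.

```python
import boto3

ecs = boto3.client("ecs")

ecs.create_cluster(
    clusterName="example-cluster",  # placeholder
    capacityProviders=["FARGATE", "FARGATE_SPOT"],
    defaultCapacityProviderStrategy=[
        # Keep a baseline of one task on regular Fargate, then prefer
        # Fargate Spot (weight 4 vs 1) for additional tasks.
        {"capacityProvider": "FARGATE", "base": 1, "weight": 1},
        {"capacityProvider": "FARGATE_SPOT", "weight": 4},
    ],
)
```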
Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} -{"global_id": 206, "doc_id": "ecs", "chunk_id": "1", "question_id": 3, "question": "What is a Task in Amazon ECS terminology?", "answer_span": "Task An application such as a batch job that performs work, and then stops.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. 
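Service Auto Scaling, listed among the features above, is configured through the Application Auto Scaling API rather than the ECS API itself, and the AWS SDKs expose it directly. A hedged boto3 sketch using target tracking on average CPU follows; the cluster and service names are placeholders.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "service/example-cluster/example-service"  # placeholders

autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        # Keep average service CPU utilization around 60 percent.
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleInCooldown": 60,
        "ScaleOutCooldown": 60,
    },
)
```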
Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} -{"global_id": 207, "doc_id": "ecs", "chunk_id": "1", "question_id": 4, "question": "What are the options for provisioning Amazon ECS?", "answer_span": "There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} -{"global_id": 208, "doc_id": "ecs", "chunk_id": "2", "question_id": 1, "question": "What does the AWS CDK do?", "answer_span": "The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. 
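The AWS CDK described above can model the same resources in Python. Below is a minimal CDK v2 sketch that defines a load-balanced Fargate service; the construct IDs, sizing, and the sample image are illustrative choices, not values from the guide.

```python
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_ecs_patterns as ecs_patterns
from constructs import Construct


class SampleFargateStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "Vpc", max_azs=2)
        cluster = ecs.Cluster(self, "Cluster", vpc=vpc)

        # The ecs_patterns module wires up the ALB, target group, and service.
        ecs_patterns.ApplicationLoadBalancedFargateService(
            self,
            "Service",
            cluster=cluster,
            cpu=256,
            memory_limit_mib=512,
            desired_count=1,
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
            ),
        )


app = App()
SampleFargateStack(app, "SampleFargateStack")
app.synth()
```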
Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} -{"global_id": 209, "doc_id": "ecs", "chunk_id": "2", "question_id": 2, "question": "What does Amazon ECS pricing depend on?", "answer_span": "Amazon ECS pricing depends on the capacity option you choose for your containers.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. 
Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} -{"global_id": 210, "doc_id": "ecs", "chunk_id": "2", "question_id": 3, "question": "What service helps ensure the correct number of Amazon EC2 instances?", "answer_span": "Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} -{"global_id": 211, "doc_id": "ecs", "chunk_id": "2", "question_id": 4, "question": "What does the Docker basics guide help you with?", "answer_span": "Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. 
Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} -{"global_id": 212, "doc_id": "ecs", "chunk_id": "3", "question_id": 1, "question": "What is the purpose of the AWS Management Console in relation to Amazon ECS?", "answer_span": "The AWS Management Console is a browser-based interface for managing Amazon ECS resources.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. 
Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} -{"global_id": 213, "doc_id": "ecs", "chunk_id": "3", "question_id": 2, "question": "What do many customers prefer when starting out with Amazon ECS?", "answer_span": "When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} -{"global_id": 214, "doc_id": "ecs", "chunk_id": "3", "question_id": 3, "question": "What is one of the tasks mentioned for getting set up for Amazon ECS?", "answer_span": "Complete the following tasks to get set up for Amazon ECS.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. 
Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} -{"global_id": 215, "doc_id": "ecs", "chunk_id": "3", "question_id": 4, "question": "What can AWS customers manage easily if they are familiar with the AWS Management Console?", "answer_span": "AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide When starting out, many customers prefer using the console because it provides instant visual feedback on whether the actions they take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to"} -{"global_id": 216, "doc_id": "ecs", "chunk_id": "4", "question_id": 1, "question": "What can AWS customers manage using the AWS Management Console?", "answer_span": "AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. 
Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions,"} -{"global_id": 217, "doc_id": "ecs", "chunk_id": "4", "question_id": 2, "question": "What is created when you sign up for an AWS account?", "answer_span": "When you sign up for an AWS account, an AWS account root user is created.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. 
For instructions,"} -{"global_id": 218, "doc_id": "ecs", "chunk_id": "4", "question_id": 3, "question": "What is a security best practice mentioned in the text?", "answer_span": "As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions,"} -{"global_id": 219, "doc_id": "ecs", "chunk_id": "4", "question_id": 4, "question": "What should you do after signing up for an AWS account?", "answer_span": "After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.", "chunk": "take succeed. AWS customers that are familiar with the AWS Management Console, can easily manage related resources such as load balancers and Amazon EC2 instances. Start with the AWS Management Console. Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. 
Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions,"} -{"global_id": 220, "doc_id": "ecs", "chunk_id": "5", "question_id": 1, "question": "What should you do to turn on multi-factor authentication for your root user?", "answer_span": "Turn on multi-factor authentication (MFA) for your root user.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"} -{"global_id": 221, "doc_id": "ecs", "chunk_id": "5", "question_id": 2, "question": "Where can you find instructions for enabling IAM Identity Center?", "answer_span": "For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. 
For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"} -{"global_id": 222, "doc_id": "ecs", "chunk_id": "5", "question_id": 3, "question": "How can you sign in as a user with administrative access?", "answer_span": "To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. 
Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"} -{"global_id": 223, "doc_id": "ecs", "chunk_id": "5", "question_id": 4, "question": "What is the first step to assign access to additional users in IAM Identity Center?", "answer_span": "In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions.", "chunk": "page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. Sign up for an AWS account 6 Amazon Elastic Container Service 2. Developer Guide Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual"} -{"global_id": 224, "doc_id": "ecs", "chunk_id": "6", "question_id": 1, "question": "What is the purpose of Amazon Virtual Private Cloud (Amazon VPC)?", "answer_span": "You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. 
Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your"} -{"global_id": 225, "doc_id": "ecs", "chunk_id": "6", "question_id": 2, "question": "What should you do if you have a default VPC?", "answer_span": "If you have a default VPC, you can skip this section and move to the next task, Create a security group.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your"} -{"global_id": 226, "doc_id": "ecs", "chunk_id": "6", "question_id": 3, "question": "What is the CIDR block size requirement for creating a VPC?", "answer_span": "The CIDR block size must have a size between /16 and /28.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. 
If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your"} -{"global_id": 227, "doc_id": "ecs", "chunk_id": "6", "question_id": 4, "question": "What do security groups control?", "answer_span": "Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level.", "chunk": "to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create a user with administrative access 7 Amazon Elastic Container Service Developer Guide Create a virtual private cloud You can use Amazon Virtual Private Cloud (Amazon VPC) to launch AWS resources into a virtual network that you've defined. We strongly suggest that you launch your container instances in a VPC. If you have a default VPC, you can skip this section and move to the next task, Create a security group. To determine whether you have a default VPC, see Work with your default VPC and default subnets in the Amazon VPC User Guide. Otherwise, you can create a nondefault VPC in your account using the steps below. For information about how to create a VPC, see Create a VPC in the Amazon VPC User Guide, and use the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated container instances, controlling both inbound and outbound traffic at the container instance level. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. 
Add any rules to open ports that are required by your"} -{"global_id": 228, "doc_id": "ecs", "chunk_id": "7", "question_id": 1, "question": "How can you connect to your container instance?", "answer_span": "connect to your container instance from your IP address using SSH.", "chunk": "connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-instances-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"} -{"global_id": 229, "doc_id": "ecs", "chunk_id": "7", "question_id": 2, "question": "What do container instances require to communicate with the Amazon ECS service endpoint?", "answer_span": "Container instances require external network access to communicate with the Amazon ECS service endpoint.", "chunk": "connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https://checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. 
For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-insta nces-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"} -{"global_id": 230, "doc_id": "ecs", "chunk_id": "7", "question_id": 3, "question": "What should you do if you plan to launch container instances in multiple Regions?", "answer_span": "If you plan to launch container instances in multiple Regions, you need to create a security group in each Region.", "chunk": "connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https:// checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-insta nces-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"} -{"global_id": 231, "doc_id": "ecs", "chunk_id": "7", "question_id": 4, "question": "What is a recommended service to find your public IP address?", "answer_span": "For example, we provide the following service: http://checkip.amazonaws.com/ or https:// checkip.amazonaws.com/.", "chunk": "connect to your container instance from your IP address using SSH. You can also add Create a virtual private cloud 8 Amazon Elastic Container Service Developer Guide rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. 
Container instances require external network access to communicate with the Amazon ECS service endpoint. If you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Tip You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https:// checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you are connecting through an internet service provider (ISP) or from behind a firewall without a static IP address, you must find out the range of IP addresses used by client computers. For information about how to create a security group, see Create a security group for your Amazon EC2 instance in the Amazon EC2 User Guide and use the following table to determine what options to select. Option Value Region The same Region in which you created your key pair. Name A name that is easy for you to remember, such as ecs-insta nces-default-cluster. VPC The default VPC (marked with \"(default)\"). Note If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases"} -{"global_id": 232, "doc_id": "ecs", "chunk_id": "8", "question_id": 1, "question": "What should you select if your account supports Amazon EC2 Classic?", "answer_span": "If your account supports Amazon EC2 Classic, select the VPC Create a security group", "chunk": "If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. Add the following three inbound rules to your security group.For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0 ) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Create a security group Type: HTTPS 10 Amazon Elastic Container Service Option Developer Guide Value Source: Anywhere (0.0.0.0/0 ) This is acceptable for a short time in a test environment, but it's unsafe in productio n environments. In productio n, authorize only a specific IP address or range of addresses to access your instance. 
Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH"} -{"global_id": 233, "doc_id": "ecs", "chunk_id": "8", "question_id": 2, "question": "What do Amazon ECS container instances not require?", "answer_span": "Amazon ECS container instances do not require any inbound ports to be open.", "chunk": "If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. Add the following three inbound rules to your security group. For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Create a security group Type: HTTPS 10 Amazon Elastic Container Service Option Developer Guide Value Source: Anywhere (0.0.0.0/0) This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH"} -{"global_id": 234, "doc_id": "ecs", "chunk_id": "8", "question_id": 3, "question": "What is acceptable for a short time in a test environment regarding the HTTP rule?", "answer_span": "This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments.", "chunk": "If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. 
Add the following three inbound rules to your security group.For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0 ) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Create a security group Type: HTTPS 10 Amazon Elastic Container Service Option Developer Guide Value Source: Anywhere (0.0.0.0/0 ) This is acceptable for a short time in a test environment, but it's unsafe in productio n environments. In productio n, authorize only a specific IP address or range of addresses to access your instance. Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH"} -{"global_id": 235, "doc_id": "ecs", "chunk_id": "8", "question_id": 4, "question": "What should you do in production regarding access to your instance?", "answer_span": "In production, authorize only a specific IP address or range of addresses to access your instance.", "chunk": "If your account supports Amazon EC2 Classic, select the VPC Create a security group 9 Amazon Elastic Container Service Option Developer Guide Value that you created in the previous task. For information about the outbound rules to add for your use cases, see Security group rules for different use cases in the Amazon EC2 User Guide. Amazon ECS container instances do not require any inbound ports to be open. However, you might want to add an SSH rule so you can log into the container instance and examine the tasks with Docker commands. You can also add rules for HTTP and HTTPS if you want your container instance to host a task that runs a web server. Container instances do require external network access to communicate with the Amazon ECS service endpoint. Complete the following steps to add these optional security group rules. Add the following three inbound rules to your security group.For information about how to create a security group, see Configure security group rules in the Amazon EC2 User Guide. Option Value HTTP rule Type: HTTP Source: Anywhere (0.0.0.0/0 ) This option automatically adds the 0.0.0.0/0 IPv4 CIDR block as the source. This is acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. HTTPS rule Create a security group Type: HTTPS 10 Amazon Elastic Container Service Option Developer Guide Value Source: Anywhere (0.0.0.0/0 ) This is acceptable for a short time in a test environment, but it's unsafe in productio n environments. In productio n, authorize only a specific IP address or range of addresses to access your instance. Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH"} -{"global_id": 236, "doc_id": "ecs", "chunk_id": "9", "question_id": 1, "question": "What is recommended for SSH access in production environments?", "answer_span": "In production, authorize only a specific IP address or range of addresses to access your instance.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in productio n environments. In productio n, authorize only a specific IP address or range of addresses to access your instance. 
Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend on using the EC2 launch type. Create the credentials to connect to your EC2 instance 12 Amazon Elastic Container Service Developer Guide AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon"} -{"global_id": 237, "doc_id": "ecs", "chunk_id": "9", "question_id": 2, "question": "How do you specify an individual IP address in CIDR notation?", "answer_span": "To specify an individual IP address in CIDR notation, add the routing prefix /32.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in production environments. In production, authorize only a specific IP address or range of addresses to access your instance. Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113.25, specify 203.0.113.25/32. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend on using the EC2 launch type. Create the credentials to connect to your EC2 instance 12 Amazon Elastic Container Service Developer Guide AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. 
For more information about regions, see Regions and Availability Zones in the Amazon"} -{"global_id": 238, "doc_id": "ecs", "chunk_id": "9", "question_id": 3, "question": "What is required to connect to an EC2 instance using SSH?", "answer_span": "You use a key pair to log in to your instance securely.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in productio n environments. In productio n, authorize only a specific IP address or range of addresses to access your instance. Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113 .25 , specify 203.0.113 .25/32 . If your company allocates addresses from a range, specify the entire range, such as 203.0.113 .0/24 . Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0 ) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend on using the EC2 launch type. Create the credentials to connect to your EC2 instance 12 Amazon Elastic Container Service Developer Guide AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon"} -{"global_id": 239, "doc_id": "ecs", "chunk_id": "9", "question_id": 4, "question": "What should you do if you plan to launch instances in multiple regions?", "answer_span": "If you plan to launch instances in multiple regions, you'll need to create a key pair in each region.", "chunk": "acceptable for a short time in a test environment, but it's unsafe in productio n environments. In productio n, authorize only a specific IP address or range of addresses to access your instance. Create a security group 11 Amazon Elastic Container Service Developer Guide Option Value SSH rule Type: SSH Source: Custom, specify the public IP address of your computer or network in CIDR notation. To specify an individual IP address in CIDR notation, add the routing prefix /32. For example, if your IP address is 203.0.113 .25 , specify 203.0.113 .25/32 . If your company allocates addresses from a range, specify the entire range, such as 203.0.113 .0/24 . Important For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0 ) to your instance, except for testing purposes and only for a short time. Create the credentials to connect to your EC2 instance For Amazon ECS, a key pair is only needed if you intend on using the EC2 launch type. Create the credentials to connect to your EC2 instance 12 Amazon Elastic Container Service Developer Guide AWS uses public-key cryptography to secure the login information for your instance. 
A Linux instance, such as an Amazon ECS container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you launch your container instance, then provide the private key when you log in using SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon"} -{"global_id": 240, "doc_id": "ecs", "chunk_id": "10", "question_id": 1, "question": "What should you do if you plan to launch instances in multiple regions?", "answer_span": "If you plan to launch instances in multiple regions, you'll need to create a key pair in each region.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"} -{"global_id": 241, "doc_id": "ecs", "chunk_id": "10", "question_id": 2, "question": "Where can you find more information about creating a key pair?", "answer_span": "For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. 
However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"} -{"global_id": 242, "doc_id": "ecs", "chunk_id": "10", "question_id": 3, "question": "What can the AWS Management Console be used for?", "answer_span": "The AWS Management Console can be used to manage all operations manually with Amazon ECS.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"} -{"global_id": 243, "doc_id": "ecs", "chunk_id": "10", "question_id": 4, "question": "What is the AWS CLI suitable for?", "answer_span": "The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources.", "chunk": "SSH. If you haven't created a key pair already, you can create one using the Amazon EC2 console. If you plan to launch instances in multiple regions, you'll need to create a key pair in each region. For more information about regions, see Regions and Availability Zones in the Amazon EC2 User Guide. 
To create a key pair • Use the Amazon EC2 console to create a key pair. For more information about creating a key pair, see Create a key pair in the Amazon EC2 User Guide. For information about how to connect to your instance, see Connect to your Linux instance in the Amazon EC2 User Guide. Install the AWS CLI The AWS Management Console can be used to manage all operations manually with Amazon ECS. However, you can install the AWS CLI on your local desktop or a developer box so that you can build scripts that can automate common management tasks in Amazon ECS. To use the AWS CLI with Amazon ECS, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. The AWS Command Line Interface (AWS CLI) is a unified tool that you can use to manage your AWS services. With this one tool alone, you can both control multiple AWS services and automate these services through scripts. The Amazon ECS commands in the AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also"} -{"global_id": 244, "doc_id": "ecs", "chunk_id": "11", "question_id": 1, "question": "What does the AWS CLI reflect?", "answer_span": "AWS CLI are a reflection of the Amazon ECS API.", "chunk": "AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also helpful to customers who want to familiarize themselves with the Amazon ECS APIs. Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface. Install the AWS CLI 13 Amazon Elastic Container Service Developer Guide Use the AWS CLI if you are or want to become familiar with the Amazon ECS APIs and corresponding CLI commands and want to write automated scripts and perform specific actions on Amazon ECS resources. AWS also provides the command line tools AWS Tools for Windows PowerShell. For more information, see the AWS Tools for Windows PowerShell User Guide. Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. 
• Define"} -{"global_id": 245, "doc_id": "ecs", "chunk_id": "11", "question_id": 2, "question": "Who is the AWS CLI suitable for?", "answer_span": "The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources.", "chunk": "AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also helpful to customers who want to familiarize themselves with the Amazon ECS APIs. Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface. Install the AWS CLI 13 Amazon Elastic Container Service Developer Guide Use the AWS CLI if you are or want to become familiar with the Amazon ECS APIs and corresponding CLI commands and want to write automated scripts and perform specific actions on Amazon ECS resources. AWS also provides the command line tools AWS Tools for Windows PowerShell. For more information, see the AWS Tools for Windows PowerShell User Guide. Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define"} -{"global_id": 246, "doc_id": "ecs", "chunk_id": "11", "question_id": 3, "question": "What operations can customers perform on Amazon ECS resources using the AWS CLI?", "answer_span": "Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface.", "chunk": "AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also helpful to customers who want to familiarize themselves with the Amazon ECS APIs. Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface. Install the AWS CLI 13 Amazon Elastic Container Service Developer Guide Use the AWS CLI if you are or want to become familiar with the Amazon ECS APIs and corresponding CLI commands and want to write automated scripts and perform specific actions on Amazon ECS resources. AWS also provides the command line tools AWS Tools for Windows PowerShell. For more information, see the AWS Tools for Windows PowerShell User Guide. 
Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define"} -{"global_id": 247, "doc_id": "ecs", "chunk_id": "11", "question_id": 4, "question": "What is one of the next steps after installing the AWS CLI?", "answer_span": "After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS.", "chunk": "AWS CLI are a reflection of the Amazon ECS API. The AWS CLI is suitable for customers who prefer and are used to scripting and interfacing with a command line tool and know exactly which actions they want to perform on their Amazon ECS resources. The AWS CLI is also helpful to customers who want to familiarize themselves with the Amazon ECS APIs. Customers can use the AWS CLI to perform a number of operations on Amazon ECS resources, including Create, Read, Update, and Delete operations, directly from the command line interface. Install the AWS CLI 13 Amazon Elastic Container Service Developer Guide Use the AWS CLI if you are or want to become familiar with the Amazon ECS APIs and corresponding CLI commands and want to write automated scripts and perform specific actions on Amazon ECS resources. AWS also provides the command line tools AWS Tools for Windows PowerShell. For more information, see the AWS Tools for Windows PowerShell User Guide. Next steps for using Amazon ECS After installing the AWS CLI, there are many different tools you can utilize as you continue to use Amazon ECS. The following links explain what some of those tools are and give examples of how to use them with Amazon ECS. • Create your first container image with Docker and push it to Amazon ECR for use in your Amazon ECS task definitions. • Learn how to create an Amazon ECS Linux task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define"} -{"global_id": 248, "doc_id": "ecs", "chunk_id": "12", "question_id": 1, "question": "What is the purpose of creating an Amazon ECS Windows task?", "answer_span": "create an Amazon ECS Windows task for the Fargate launch type.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. 
• Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service."} -{"global_id": 249, "doc_id": "ecs", "chunk_id": "12", "question_id": 2, "question": "What technology does Amazon ECS use to launch containers?", "answer_span": "Amazon ECS uses Docker images in task definitions to launch containers.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. • Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. 
• Your user has the required IAM permissions to access and use the Amazon ECR service."} -{"global_id": 250, "doc_id": "ecs", "chunk_id": "12", "question_id": 3, "question": "What must you ensure before beginning to use Amazon ECS?", "answer_span": "ensure the following prerequisites are met.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. • Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service."} -{"global_id": 251, "doc_id": "ecs", "chunk_id": "12", "question_id": 4, "question": "Where can you push your Docker image for use in Amazon ECS task definitions?", "answer_span": "push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions.", "chunk": "create an Amazon ECS Windows task for the Fargate launch type. • Learn how to create an Amazon ECS Windows task for the EC2 launch type. • Using your preferred programming language, define infrastructure or architecture as code with the Creating Amazon ECS resources using the AWS CDK. • Define and manage all AWS resources in your environment with automated deployment using Using Amazon ECS with AWS CloudFormation. • Use the complete Creating Amazon ECS resources using the AWS Copilot command line interface end-to-end developer workflow to create, release, and operate container applications that comply with AWS best practices for infrastructure. Creating a container image for use on Amazon ECS Amazon ECS uses Docker images in task definitions to launch containers. Docker is a technology that provides the tools for you to build, run, test, and deploy distributed applications in containers. Amazon ECS schedules containerized applications on to container instances or on to AWS Fargate. 
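Since the text above notes that Amazon ECS launches containers from the images referenced in task definitions, a minimal sketch of that linkage may help; the family name, role ARN, and image URI are hypothetical placeholders, and this fragment is illustrative only, not a task definition defined elsewhere in this guide.

# Write a minimal Fargate task definition that points at a container image, then register it.
cat > taskdef.json <<'EOF'
{
  "family": "hello-world-demo",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::aws_account_id:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json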
Containerized applications are packaged as container images. This example creates a container image for a web server. You can create your first Docker image, and then push that image to Amazon ECR, which is a container registry, for use in your Amazon ECS task definitions. This walkthrough assumes that you Next steps for using Amazon ECS 14 Amazon Elastic Container Service Developer Guide possess a basic understanding of what Docker is and how it works. For more information about Docker, see What is Docker? and the Docker documentation. Prerequisites Before you begin, ensure the following prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service."} -{"global_id": 252, "doc_id": "ecs", "chunk_id": "13", "question_id": 1, "question": "What must be completed before using Amazon ECR?", "answer_span": "Ensure you have completed the Amazon ECR setup steps.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} -{"global_id": 253, "doc_id": "ecs", "chunk_id": "13", "question_id": 2, "question": "What permissions are required to access the Amazon ECR service?", "answer_span": "Your user has the required IAM permissions to access and use the Amazon ECR service.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. 
For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} -{"global_id": 254, "doc_id": "ecs", "chunk_id": "13", "question_id": 3, "question": "What should you do if you prefer to use an Amazon EC2 instance for Docker?", "answer_span": "If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. 
For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} -{"global_id": 255, "doc_id": "ecs", "chunk_id": "13", "question_id": 4, "question": "Where can you find the installation steps for Docker on Amazon Linux 2023?", "answer_span": "For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023.", "chunk": "prerequisites are met. • Ensure you have completed the Amazon ECR setup steps. For more information, see Moving an image through its lifecycle in Amazon ECR in the Amazon Elastic Container Registry User Guide. • Your user has the required IAM permissions to access and use the Amazon ECR service. For more information, see Amazon ECR managed policies. • You have Docker installed. For Docker installation steps for Amazon Linux 2023, see Installing Docker on AL2023. For all other operating systems, see the Docker documentation at Docker Desktop overview. • You have the AWS CLI installed and configured. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. If you don't have or need a local development environment and you prefer to use an Amazon EC2 instance to use Docker, we provide the following steps to launch an Amazon EC2 instance using Amazon Linux 2023 and install Docker Engine and the Docker CLI. Installing Docker on AL2023 Docker is available on many different operating systems, including most modern Linux distributions, like Ubuntu, and even macOS and Windows. For more information about how to install Docker on your particular operating system, go to the Docker installation guide. You do not need a local development system to use Docker. If you are using Amazon EC2 already, you can launch an Amazon Linux 2023 instance and install Docker to get started. If you already have Docker installed, skip to Create a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide."} -{"global_id": 256, "doc_id": "ecs", "chunk_id": "14", "question_id": 1, "question": "What is the first step to install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI?", "answer_span": "1. Launch an instance with the latest Amazon Linux 2023 AMI.", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. 
Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} -{"global_id": 257, "doc_id": "ecs", "chunk_id": "14", "question_id": 2, "question": "What command is used to install the most recent Docker Community Edition package?", "answer_span": "sudo yum install docker", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} -{"global_id": 258, "doc_id": "ecs", "chunk_id": "14", "question_id": 3, "question": "What should you do after adding the ec2-user to the docker group?", "answer_span": "Log out and log back in again to pick up the new docker group permissions.", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. 
Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} -{"global_id": 259, "doc_id": "ecs", "chunk_id": "14", "question_id": 4, "question": "What error might indicate that you need to reboot your instance?", "answer_span": "Cannot connect to the Docker daemon. Is the docker daemon running on this host?", "chunk": "a Docker image. To install Docker on an Amazon EC2 instance using an Amazon Linux 2023 AMI 1. Launch an instance with the latest Amazon Linux 2023 AMI. For more information, see Launch an EC2 instance using the launch instance wizard in the console in the Amazon EC2 User Guide. 2. Connect to your instance. For more information, see Connect to your EC2 instance in the Amazon EC2 User Guide. 3. Update the installed packages and package cache on your instance. Prerequisites 15 Amazon Elastic Container Service Developer Guide sudo yum update -y 4. Install the most recent Docker Community Edition package. sudo yum install docker 5. Start the Docker service. sudo service docker start 6. Add the ec2-user to the docker group so you can execute Docker commands without using sudo. sudo usermod -a -G docker ec2-user 7. Log out and log back in again to pick up the new docker group permissions. You can accomplish this by closing your current SSH terminal window and reconnecting to your instance in a new one. Your new SSH session will have the appropriate docker group permissions. 8. Verify that the ec2-user can run Docker commands without sudo. docker info Note In some cases, you may need to reboot your instance to provide permissions for the ec2-user to access the Docker daemon. Try rebooting your instance if you see the following error: Cannot connect to the Docker daemon. Is the docker daemon running on this host? Create a Docker image Amazon ECS task definitions use container images to launch containers on the container instances in your clusters. 
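Taken together, the installation steps above reduce to the following short sequence on an Amazon Linux 2023 instance (run over SSH as ec2-user); this is only a consolidated sketch of the commands already shown, with -y added to skip the install prompt.

sudo yum update -y                      # update installed packages and the package cache
sudo yum install -y docker              # install the Docker Community Edition package
sudo service docker start               # start the Docker service
sudo usermod -a -G docker ec2-user      # let ec2-user run Docker commands without sudo
# Log out and back in (or reboot the instance) so the new group membership applies, then verify:
docker info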
In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance,"} -{"global_id": 260, "doc_id": "ecs", "chunk_id": "15", "question_id": 1, "question": "What is the purpose of task definitions in Amazon ECS?", "answer_span": "task definitions use container images to launch containers on the container instances in your clusters.", "chunk": "task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance, and then push the image to the Amazon ECR container registry so you can use it in an Amazon ECS task definition. To create a Docker image of a simple web application 1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use for your Docker image and what you want installed and running on it. For more information about Dockerfiles, go to the Dockerfile Reference. touch Dockerfile 2. Edit the Dockerfile you just created and add the following content. FROM public.ecr.aws/amazonlinux/amazonlinux:latest # Update installed packages and install Apache RUN yum update -y && \\ yum install -y httpd # Write hello world message RUN echo 'Hello World!' > /var/www/html/index.html # Configure Apache RUN echo 'mkdir -p /var/run/httpd' >> /root/run_apache.sh && \\ echo 'mkdir -p /var/lock/httpd' >> /root/run_apache.sh && \\ echo '/usr/sbin/httpd -D FOREGROUND' >> /root/run_apache.sh && \\ chmod 755 /root/run_apache.sh EXPOSE 80 CMD /root/run_apache.sh This Dockerfile uses the public Amazon Linux 2023 image hosted on Amazon ECR Public. The RUN instructions update the package caches, installs some software packages for the web server, and then write the \"Hello World!\" content to the web servers document root. The EXPOSE instruction means that port 80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile"} -{"global_id": 261, "doc_id": "ecs", "chunk_id": "15", "question_id": 2, "question": "What is the first step to create a Docker image of a simple web application?", "answer_span": "Create a file called Dockerfile.", "chunk": "task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance, and then push the image to the Amazon ECR container registry so you can use it in an Amazon ECS task definition. To create a Docker image of a simple web application 1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use for your Docker image and what you want installed and running on it. For more information about Dockerfiles, go to the Dockerfile Reference. touch Dockerfile 2. Edit the Dockerfile you just created and add the following content. 
FROM public.ecr.aws/amazonlinux/amazonlinux:latest # Update installed packages and install Apache RUN yum update -y && \\ yum install -y httpd # Write hello world message RUN echo 'Hello World!' > /var/www/html/index.html # Configure Apache RUN echo 'mkdir -p /var/run/httpd' >> /root/run_apache.sh && \\ echo 'mkdir -p /var/lock/httpd' >> /root/run_apache.sh && \\ echo '/usr/sbin/httpd -D FOREGROUND' >> /root/run_apache.sh && \\ chmod 755 /root/run_apache.sh EXPOSE 80 CMD /root/run_apache.sh This Dockerfile uses the public Amazon Linux 2023 image hosted on Amazon ECR Public. The RUN instructions update the package caches, installs some software packages for the web server, and then write the \"Hello World!\" content to the web servers document root. The EXPOSE instruction means that port 80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile"} -{"global_id": 262, "doc_id": "ecs", "chunk_id": "15", "question_id": 3, "question": "What does the RUN instruction in the Dockerfile do?", "answer_span": "The RUN instructions update the package caches, installs some software packages for the web server, and then write the 'Hello World!' content to the web servers document root.", "chunk": "task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance, and then push the image to the Amazon ECR container registry so you can use it in an Amazon ECS task definition. To create a Docker image of a simple web application 1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use for your Docker image and what you want installed and running on it. For more information about Dockerfiles, go to the Dockerfile Reference. touch Dockerfile 2. Edit the Dockerfile you just created and add the following content. FROM public.ecr.aws/amazonlinux/amazonlinux:latest # Update installed packages and install Apache RUN yum update -y && \\ yum install -y httpd # Write hello world message RUN echo 'Hello World!' > /var/www/html/index.html # Configure Apache RUN echo 'mkdir -p /var/run/httpd' >> /root/run_apache.sh && \\ echo 'mkdir -p /var/lock/httpd' >> /root/run_apache.sh && \\ echo '/usr/sbin/httpd -D FOREGROUND' >> /root/run_apache.sh && \\ chmod 755 /root/run_apache.sh EXPOSE 80 CMD /root/run_apache.sh This Dockerfile uses the public Amazon Linux 2023 image hosted on Amazon ECR Public. The RUN instructions update the package caches, installs some software packages for the web server, and then write the \"Hello World!\" content to the web servers document root. The EXPOSE instruction means that port 80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. 
Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile"} -{"global_id": 263, "doc_id": "ecs", "chunk_id": "15", "question_id": 4, "question": "Which port is exposed by the Dockerfile for the web server?", "answer_span": "The EXPOSE instruction means that port 80 on the container is the one that is listening.", "chunk": "task definitions use container images to launch containers on the container instances in your clusters. In this section, you create a Docker image of a simple web application, and test Create a Docker image 16 Amazon Elastic Container Service Developer Guide it on your local system or Amazon EC2 instance, and then push the image to the Amazon ECR container registry so you can use it in an Amazon ECS task definition. To create a Docker image of a simple web application 1. Create a file called Dockerfile. A Dockerfile is a manifest that describes the base image to use for your Docker image and what you want installed and running on it. For more information about Dockerfiles, go to the Dockerfile Reference. touch Dockerfile 2. Edit the Dockerfile you just created and add the following content. FROM public.ecr.aws/amazonlinux/amazonlinux:latest # Update installed packages and install Apache RUN yum update -y && \\ yum install -y httpd # Write hello world message RUN echo 'Hello World!' > /var/www/html/index.html # Configure Apache RUN echo 'mkdir -p /var/run/httpd' >> /root/run_apache.sh && \\ echo 'mkdir -p /var/lock/httpd' >> /root/run_apache.sh && \\ echo '/usr/sbin/httpd -D FOREGROUND' >> /root/run_apache.sh && \\ chmod 755 /root/run_apache.sh EXPOSE 80 CMD /root/run_apache.sh This Dockerfile uses the public Amazon Linux 2023 image hosted on Amazon ECR Public. The RUN instructions update the package caches, installs some software packages for the web server, and then write the \"Hello World!\" content to the web servers document root. The EXPOSE instruction means that port 80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile"} -{"global_id": 264, "doc_id": "ecs", "chunk_id": "16", "question_id": 1, "question": "What command is used to build the Docker image from your Dockerfile?", "answer_span": "3. Build the Docker image from your Dockerfile.", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command an ARM based system, such as Apple Silicon, use the -platform option \"--platform linux/amd64\". docker build -t hello-world . 4. List your container image. docker images --filter reference=hello-world Output: REPOSITORY SIZE hello-world 194MB 5. TAG IMAGE ID CREATED latest e9ffedc8c286 4 minutes ago Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. 
You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push,"} -{"global_id": 265, "doc_id": "ecs", "chunk_id": "16", "question_id": 2, "question": "What is the size of the hello-world image?", "answer_span": "Output: REPOSITORY SIZE hello-world 194MB", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command an ARM based system, such as Apple Silicon, use the -platform option \"--platform linux/amd64\". docker build -t hello-world . 4. List your container image. docker images --filter reference=hello-world Output: REPOSITORY SIZE hello-world 194MB 5. TAG IMAGE ID CREATED latest e9ffedc8c286 4 minutes ago Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push,"} -{"global_id": 266, "doc_id": "ecs", "chunk_id": "16", "question_id": 3, "question": "What command should you use to run the newly built image?", "answer_span": "docker run -t -i -p 80:80 hello-world", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command an ARM based system, such as Apple Silicon, use the -platform option \"--platform linux/amd64\". docker build -t hello-world . 4. 
List your container image. docker images --filter reference=hello-world Output: REPOSITORY SIZE hello-world 194MB 5. TAG IMAGE ID CREATED latest e9ffedc8c286 4 minutes ago Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push,"} -{"global_id": 267, "doc_id": "ecs", "chunk_id": "16", "question_id": 4, "question": "What should you do if you are running Docker locally?", "answer_span": "If you are running Docker locally, point your browser to http://localhost/.", "chunk": "80 on the container is the one that is listening, and the CMD instruction starts the web server. 3. Build the Docker image from your Dockerfile. Create a Docker image 17 Amazon Elastic Container Service Developer Guide Note Some versions of Docker may require the full path to your Dockerfile in the following command, instead of the relative path shown below. If you run the command an ARM based system, such as Apple Silicon, use the -platform option \"--platform linux/amd64\". docker build -t hello-world . 4. List your container image. docker images --filter reference=hello-world Output: REPOSITORY SIZE hello-world 194MB 5. TAG IMAGE ID CREATED latest e9ffedc8c286 4 minutes ago Run the newly built image. The -p 80:80 option maps the exposed port 80 on the container to port 80 on the host system. docker run -t -i -p 80:80 hello-world Note Output from the Apache web server is displayed in the terminal window. You can ignore the \"Could not reliably determine the fully qualified domain name\" message. 6. Open a browser and point to the server that is running Docker and hosting your container. • If you are using an EC2 instance, this is the Public DNS value for the server, which is the same address you use to connect to the instance with SSH. Make sure that the security group for your instance allows inbound traffic on port 80. Create a Docker image 18 Amazon Elastic Container Service Developer Guide • If you are running Docker locally, point your browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push,"} -{"global_id": 268, "doc_id": "ecs", "chunk_id": "17", "question_id": 1, "question": "What should you see when you navigate to http://localhost/?", "answer_span": "You should see a web page with your \"Hello World!\" statement.", "chunk": "browser to http://localhost/. 
You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region, with your AWS Region, for example, us-east-1. aws ecr create-repository --repository-name hello-repository --region region Output: { \"repository\": { \"registryId\": \"aws_account_id\", \"repositoryName\": \"hello-repository\", \"repositoryArn\": \"arn:aws:ecr:region:aws_account_id:repository/hellorepository\", \"createdAt\": 1505337806.0, \"repositoryUri\": \"aws_account_id.dkr.ecr.region.amazonaws.com/hellorepository\" } } 2. Tag the hello-world image with the repositoryUri value from the previous step. docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Push your image to Amazon Elastic Container Registry 19 Amazon Elastic Container Service 3. Developer Guide Run the aws ecr get-login-password command. Specify the registry URI you want to authenticate to. For more information, see Registry Authentication in the Amazon Elastic Container Registry User Guide. aws ecr get-login-password --region region | docker login --username AWS -password-stdin aws_account_id.dkr.ecr.region.amazonaws.com Output: Login Succeeded Important If you receive an error, install or upgrade to the latest version of the AWS CLI. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you"} -{"global_id": 269, "doc_id": "ecs", "chunk_id": "17", "question_id": 2, "question": "What command is used to create an Amazon ECR repository?", "answer_span": "aws ecr create-repository --repository-name hello-repository --region region", "chunk": "browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region, with your AWS Region, for example, us-east-1. 
aws ecr create-repository --repository-name hello-repository --region region Output: { \"repository\": { \"registryId\": \"aws_account_id\", \"repositoryName\": \"hello-repository\", \"repositoryArn\": \"arn:aws:ecr:region:aws_account_id:repository/hellorepository\", \"createdAt\": 1505337806.0, \"repositoryUri\": \"aws_account_id.dkr.ecr.region.amazonaws.com/hellorepository\" } } 2. Tag the hello-world image with the repositoryUri value from the previous step. docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Push your image to Amazon Elastic Container Registry 19 Amazon Elastic Container Service 3. Developer Guide Run the aws ecr get-login-password command. Specify the registry URI you want to authenticate to. For more information, see Registry Authentication in the Amazon Elastic Container Registry User Guide. aws ecr get-login-password --region region | docker login --username AWS -password-stdin aws_account_id.dkr.ecr.region.amazonaws.com Output: Login Succeeded Important If you receive an error, install or upgrade to the latest version of the AWS CLI. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you"} -{"global_id": 270, "doc_id": "ecs", "chunk_id": "17", "question_id": 3, "question": "What command do you run to authenticate to the registry?", "answer_span": "aws ecr get-login-password --region region | docker login --username AWS -password-stdin aws_account_id.dkr.ecr.region.amazonaws.com", "chunk": "browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region, with your AWS Region, for example, us-east-1. aws ecr create-repository --repository-name hello-repository --region region Output: { \"repository\": { \"registryId\": \"aws_account_id\", \"repositoryName\": \"hello-repository\", \"repositoryArn\": \"arn:aws:ecr:region:aws_account_id:repository/hellorepository\", \"createdAt\": 1505337806.0, \"repositoryUri\": \"aws_account_id.dkr.ecr.region.amazonaws.com/hellorepository\" } } 2. Tag the hello-world image with the repositoryUri value from the previous step. docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Push your image to Amazon Elastic Container Registry 19 Amazon Elastic Container Service 3. Developer Guide Run the aws ecr get-login-password command. Specify the registry URI you want to authenticate to. For more information, see Registry Authentication in the Amazon Elastic Container Registry User Guide. 
aws ecr get-login-password --region region | docker login --username AWS -password-stdin aws_account_id.dkr.ecr.region.amazonaws.com Output: Login Succeeded Important If you receive an error, install or upgrade to the latest version of the AWS CLI. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you"} -{"global_id": 271, "doc_id": "ecs", "chunk_id": "17", "question_id": 4, "question": "What should you do if you receive an error related to the AWS CLI?", "answer_span": "If you receive an error, install or upgrade to the latest version of the AWS CLI.", "chunk": "browser to http://localhost/. You should see a web page with your \"Hello World!\" statement. 7. Stop the Docker container by typing Ctrl + c. Push your image to Amazon Elastic Container Registry Amazon ECR is a managed AWS managed image registry service. You can use the Docker CLI to push, pull, and manage images in your Amazon ECR repositories. For Amazon ECR product details, featured customer case studies, and FAQs, see the Amazon Elastic Container Registry product detail pages. To tag your image and push it to Amazon ECR 1. Create an Amazon ECR repository to store your hello-world image. Note the repositoryUri in the output. Substitute region, with your AWS Region, for example, us-east-1. aws ecr create-repository --repository-name hello-repository --region region Output: { \"repository\": { \"registryId\": \"aws_account_id\", \"repositoryName\": \"hello-repository\", \"repositoryArn\": \"arn:aws:ecr:region:aws_account_id:repository/hellorepository\", \"createdAt\": 1505337806.0, \"repositoryUri\": \"aws_account_id.dkr.ecr.region.amazonaws.com/hellorepository\" } } 2. Tag the hello-world image with the repositoryUri value from the previous step. docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Push your image to Amazon Elastic Container Registry 19 Amazon Elastic Container Service 3. Developer Guide Run the aws ecr get-login-password command. Specify the registry URI you want to authenticate to. For more information, see Registry Authentication in the Amazon Elastic Container Registry User Guide. aws ecr get-login-password --region region | docker login --username AWS -password-stdin aws_account_id.dkr.ecr.region.amazonaws.com Output: Login Succeeded Important If you receive an error, install or upgrade to the latest version of the AWS CLI. For more information, see Installing or updating to the latest version of the AWS CLI in the AWS Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you"} -{"global_id": 272, "doc_id": "ecs", "chunk_id": "18", "question_id": 1, "question": "What command is used to push the image to Amazon ECR?", "answer_span": "docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository", "chunk": "Command Line Interface User Guide. 4. 
Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers. You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} -{"global_id": 273, "doc_id": "ecs", "chunk_id": "18", "question_id": 2, "question": "What should you do when you are done experimenting with your Amazon ECR image?", "answer_span": "you can delete the repository so you are not charged for image storage.", "chunk": "Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers. 
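End to end, the push-to-Amazon ECR walkthrough above reduces to the sketch below; aws_account_id and region remain placeholders exactly as in the guide, and note that the docker login flag is --password-stdin, with two leading dashes.

docker build -t hello-world .                                         # build the image from the Dockerfile
aws ecr create-repository --repository-name hello-repository --region region
docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository
aws ecr delete-repository --repository-name hello-repository --region region --force   # optional cleanup so you are not charged for image storage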
You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} -{"global_id": 274, "doc_id": "ecs", "chunk_id": "18", "question_id": 3, "question": "What is required for your task definitions after creating and pushing your container image?", "answer_span": "Your task definitions require a task execution role.", "chunk": "Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers. You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} -{"global_id": 275, "doc_id": "ecs", "chunk_id": "18", "question_id": 4, "question": "What service does Amazon ECS provide for managing containers?", "answer_span": "Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers.", "chunk": "Command Line Interface User Guide. 4. Push the image to Amazon ECR with the repositoryUri value from the earlier step. docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository Clean up To continue on with creating an Amazon ECS task definition and launching a task with your container image, skip to the Next steps. When you are done experimenting with your Amazon ECR image, you can delete the repository so you are not charged for image storage. aws ecr delete-repository --repository-name hello-repository --region region --force Next steps Your task definitions require a task execution role. 
For more information, see Amazon ECS task execution IAM role. After you have created and pushed your container image to Amazon ECR, you can use that image in a task definition. For more information, see one of the following: • the section called “Learn how to create a Linux task for the Fargate launch type” Clean up 20 Amazon Elastic Container Service Developer Guide • the section called “Learn how to create a Windows task for the Fargate launch type” • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Learn how to create an Amazon ECS Linux task for the Fargate launch type Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage your containers. You can host your containers on a serverless infrastructure that is managed by Amazon ECS by launching your services or tasks on AWS Fargate. For more information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before"} -{"global_id": 276, "doc_id": "ecs", "chunk_id": "19", "question_id": 1, "question": "What is required for Fargate tasks in Amazon ECS?", "answer_span": "the console attempts to automatically create the task execution IAM role, which is required for Fargate tasks.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and that your IAM user has the permissions specified in the AdministratorAccess IAM policy example. The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS. • Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} -{"global_id": 277, "doc_id": "ecs", "chunk_id": "19", "question_id": 2, "question": "What must be true for the console to create the IAM role?", "answer_span": "one of the following must be true: • Your user has administrator access.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and that your IAM user has the permissions specified in the AdministratorAccess IAM policy example. 
The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS. • Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} -{"global_id": 278, "doc_id": "ecs", "chunk_id": "19", "question_id": 3, "question": "What must the security group have open for inbound traffic?", "answer_span": "the security group you select when creating a service with your task definition must have port 80 open for inbound traffic.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and that your IAM user has the permissions specified in the AdministratorAccess IAM policy example. The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS. • Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} -{"global_id": 279, "doc_id": "ecs", "chunk_id": "19", "question_id": 4, "question": "Where can you find more information about setting up Amazon ECS?", "answer_span": "For more information, see Set up to use Amazon ECS.", "chunk": "information on Fargate, see AWS Fargate for Amazon ECS. Get started with Amazon ECS on AWS Fargate by using the Fargate launch type for your tasks in the Regions where Amazon ECS supports AWS Fargate. Complete the following steps to get started with Amazon ECS on AWS Fargate. Prerequisites Before you begin, complete the steps in Set up to use Amazon ECS and that your IAM user has the permissions specified in the AdministratorAccess IAM policy example. The console attempts to automatically create the task execution IAM role, which is required for Fargate tasks. To ensure that the console is able to create this IAM role, one of the following must be true: • Your user has administrator access. For more information, see Set up to use Amazon ECS. 
• Your user has the IAM permissions to create a service role. For more information, see Creating a Role to Delegate Permissions to an AWS Service. • A user with administrator access has manually created the task execution role so that it is available on the account to be used. For more information, see Amazon ECS task execution IAM role. Important The security group you select when creating a service with your task definition must have port 80 open for inbound traffic. Add the following inbound rule to your security group. Learn how to create a Linux task for the Fargate launch type 21"} -{"global_id": 280, "doc_id": "ec2", "chunk_id": "0", "question_id": 1, "question": "What does Amazon EC2 provide?", "answer_span": "Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} -{"global_id": 281, "doc_id": "ec2", "chunk_id": "0", "question_id": 2, "question": "What is an EC2 instance?", "answer_span": "An EC2 instance is a virtual server in the AWS Cloud.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. 
You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} -{"global_id": 282, "doc_id": "ec2", "chunk_id": "0", "question_id": 3, "question": "What are Amazon Machine Images (AMIs)?", "answer_span": "Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software).", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. 
Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} -{"global_id": 283, "doc_id": "ec2", "chunk_id": "0", "question_id": 4, "question": "What is the purpose of security groups in Amazon EC2?", "answer_span": "Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} -{"global_id": 284, "doc_id": "ec2", "chunk_id": "1", "question_id": 1, "question": "What does AWS store for secure login information?", "answer_span": "AWS stores the public key and you store the private key in a secure place.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. 
Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} -{"global_id": 285, "doc_id": "ec2", "chunk_id": "1", "question_id": 2, "question": "What is a security group in the context of Amazon EC2?", "answer_span": "A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. 
Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} -{"global_id": 286, "doc_id": "ec2", "chunk_id": "1", "question_id": 3, "question": "What standard has Amazon EC2 been validated as compliant with?", "answer_span": "Amazon EC2 has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS).", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} -{"global_id": 287, "doc_id": "ec2", "chunk_id": "1", "question_id": 4, "question": "What service helps ensure the correct number of Amazon EC2 instances are available?", "answer_span": "Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. 
Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Elastic Load Balancing Automatically distribute incoming application traffic across multiple instances. Amazon GuardDuty Detect potentially unauthorized or malicious use of your EC2 instances. EC2 Image Builder Automate the creation, management, and deployment of customized, secure, and up-to-date server images. AWS Launch Wizard Size, configure, and deploy AWS resources for third-party applications without having to manually identify and provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform"} -{"global_id": 288, "doc_id": "ec2", "chunk_id": "2", "question_id": 1, "question": "What is AWS Systems Manager used for?", "answer_span": "AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. 
You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} -{"global_id": 289, "doc_id": "ec2", "chunk_id": "2", "question_id": 2, "question": "What is Amazon Lightsail used for?", "answer_span": "Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} -{"global_id": 290, "doc_id": "ec2", "chunk_id": "2", "question_id": 3, "question": "How can you access Amazon EC2?", "answer_span": "You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. 
Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} -{"global_id": 291, "doc_id": "ec2", "chunk_id": "2", "question_id": 4, "question": "What format can AWS CloudFormation templates be in?", "answer_span": "You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you.", "chunk": "provision individual AWS resources. AWS Systems Manager Perform operations at scale on EC2 instances with this secure end-to-end management solution. Additional compute services You can launch instances using another AWS compute service instead of using Amazon EC2. Amazon Lightsail Build websites or web applications using Amazon Lightsail, a cloud platform that provides the resources that you need to deploy your project quickly, for a low, predictable monthly price. To compare Amazon EC2 and Lightsail, see Amazon Lightsail or Amazon EC2. Amazon Elastic Container Service (Amazon ECS) Deploy, manage, and scale containerized applications on a cluster of EC2 instances. For more information, see Choosing an AWS container service. Amazon Elastic Kubernetes Service (Amazon EKS) Run your Kubernetes applications on AWS. For more information, see Choosing an AWS container service. Related services 3 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. 
You can reuse your CloudFormation templates to provision the same resources multiple times, whether"} -{"global_id": 292, "doc_id": "ec2", "chunk_id": "3", "question_id": 1, "question": "What format can the template for AWS CloudFormation be in?", "answer_span": "You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you.", "chunk": "Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} -{"global_id": 293, "doc_id": "ec2", "chunk_id": "3", "question_id": 2, "question": "What do AWS SDKs provide for software developers?", "answer_span": "AWS provides libraries, sample code, tutorials, and other resources for software developers.", "chunk": "Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. 
These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} -{"global_id": 294, "doc_id": "ec2", "chunk_id": "3", "question_id": 3, "question": "What can you use the Tools for PowerShell for?", "answer_span": "The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line.", "chunk": "Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} -{"global_id": 295, "doc_id": "ec2", "chunk_id": "3", "question_id": 4, "question": "What type of requests does the Amazon EC2 Query API use?", "answer_span": "These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action.", "chunk": "Command Reference. 
AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Query API Amazon EC2 provides a Query API. These requests are HTTP or HTTPS requests that use the HTTP verbs GET or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore"} -{"global_id": 296, "doc_id": "ec2", "chunk_id": "4", "question_id": 1, "question": "What is the Free Tier option for Amazon EC2?", "answer_span": "You can get started with Amazon EC2 for free.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no longterm commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either OnDemand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill. 
For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} -{"global_id": 297, "doc_id": "ec2", "chunk_id": "4", "question_id": 2, "question": "How does On-Demand Instances pricing work?", "answer_span": "Pay for the instances that you use by the second, with a minimum of 60 seconds, with no longterm commitments or upfront payments.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no longterm commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either OnDemand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill. For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} -{"global_id": 298, "doc_id": "ec2", "chunk_id": "4", "question_id": 3, "question": "What is the benefit of Savings Plans?", "answer_span": "You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no longterm commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. 
Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either OnDemand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill. For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} -{"global_id": 299, "doc_id": "ec2", "chunk_id": "4", "question_id": 4, "question": "What does per-second billing do?", "answer_span": "Removes the cost of unused minutes and seconds from your bill.", "chunk": "or POST and a Query parameter named Action. For more information about the API actions for Amazon EC2, see Actions in the Amazon EC2 API Reference. Pricing for Amazon EC2 Amazon EC2 provides the following pricing options: Free Tier You can get started with Amazon EC2 for free. To explore the Free Tier options, see AWS Free Tier. On-Demand Instances Pay for the instances that you use by the second, with a minimum of 60 seconds, with no longterm commitments or upfront payments. Savings Plans You can reduce your Amazon EC2 costs by making a commitment to a consistent amount of usage, in USD per hour, for a term of 1 or 3 years. Reserved Instances You can reduce your Amazon EC2 costs by making a commitment to a specific instance configuration, including instance type and Region, for a term of 1 or 3 years. Spot Instances Request unused EC2 instances, which can reduce your Amazon EC2 costs significantly. Dedicated Hosts Reduce costs by using a physical EC2 server that is fully dedicated for your use, either OnDemand or as part of a Savings Plan. You can use your existing server-bound software licenses and get help meeting compliance requirements. On-Demand Capacity Reservations Reserve compute capacity for your EC2 instances in a specific Availability Zone for any duration of time. Pricing 5 Amazon Elastic Compute Cloud User Guide Per-second billing Removes the cost of unused minutes and seconds from your bill. For a complete list of charges and prices for Amazon EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed"} -{"global_id": 300, "doc_id": "ec2", "chunk_id": "5", "question_id": 1, "question": "What tool can be used to create estimates for AWS use cases?", "answer_span": "To create estimates for your AWS use cases, use the AWS Pricing Calculator.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing. 
Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} -{"global_id": 301, "doc_id": "ec2", "chunk_id": "5", "question_id": 2, "question": "Where can you see your AWS bill?", "answer_span": "To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. 
Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} -{"global_id": 302, "doc_id": "ec2", "chunk_id": "5", "question_id": 3, "question": "What should you include when calculating the cost of a provisioned environment?", "answer_span": "When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} -{"global_id": 303, "doc_id": "ec2", "chunk_id": "5", "question_id": 4, "question": "How far back can you view data using AWS Cost Explorer?", "answer_span": "You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months.", "chunk": "EC2 and more information about the purchase models, see Amazon EC2 pricing. Estimates, billing, and cost optimization To create estimates for your AWS use cases, use the AWS Pricing Calculator. To estimate the cost of transforming Microsoft workloads to a modern architecture that uses open source and cloud-native services deployed on AWS, use the AWS Modernization Calculator for Microsoft Workloads. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. Your bill contains links to usage reports that provide details about your bill. To learn more about AWS account billing, see AWS Billing and Cost Management User Guide. 
If you have questions concerning AWS billing, accounts, and events, contact AWS Support. To calculate the cost of a sample provisioned environment, see Cloud Economics Center. When calculating the cost of a provisioned environment, remember to include incidental costs such as snapshot storage for EBS volumes. You can optimize the cost, security, and performance of your AWS environment using AWS Trusted Advisor. You can use AWS Cost Explorer to analyze the cost and usage of your EC2 instances. You can view data up to the last 13 months, and forecast how much you are likely to spend for the next 12 months. For more information, see Analyzing your costs and usage with AWS Cost Explorer in the AWS Cost Management User Guide. Resources • Amazon EC2 features • AWS re:Post • AWS Skill Builder • AWS Support Estimates, billing, and cost optimization 6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect"} -{"global_id": 304, "doc_id": "ec2", "chunk_id": "6", "question_id": 1, "question": "What is Amazon EC2?", "answer_span": "You'll learn how to launch and connect to an EC2 instance.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} -{"global_id": 305, "doc_id": "ec2", "chunk_id": "6", "question_id": 2, "question": "What is an instance in the context of Amazon EC2?", "answer_span": "An instance is a virtual server in the AWS Cloud.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. 
An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} -{"global_id": 306, "doc_id": "ec2", "chunk_id": "6", "question_id": 3, "question": "What does a key pair consist of?", "answer_span": "A key pair – A set of security credentials that you use to prove your identity when connecting to your instance.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. 
If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} -{"global_id": 307, "doc_id": "ec2", "chunk_id": "6", "question_id": 4, "question": "What is the cost to get started with Amazon EC2?", "answer_span": "When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier.", "chunk": "6 Amazon Elastic Compute Cloud User Guide • Hands-on Tutorials • Web Hosting • Windows on AWS Resources 7 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything"} -{"global_id": 308, "doc_id": "ec2", "chunk_id": "7", "question_id": 1, "question": "What is the AWS Free Tier?", "answer_span": "you can get started with Amazon EC2 for free using the AWS Free Tier.", "chunk": "your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. 
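The same launch can be scripted with the AWS CLI; the following is a minimal sketch under stated assumptions: the AMI ID and tag value are placeholders, the key pair is created on the spot, and the default VPC, subnet, and security group are used, as in the console steps that follow.
# Sketch: create a key pair, then launch one instance (placeholder values).
aws ec2 create-key-pair --key-name MyTestKeyPair \
    --query 'KeyMaterial' --output text > MyTestKeyPair.pem
chmod 400 MyTestKeyPair.pem
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --key-name MyTestKeyPair \
    --count 1 \
    --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-first-instance}]'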
This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4."} -{"global_id": 309, "doc_id": "ec2", "chunk_id": "7", "question_id": 2, "question": "What happens if you exceed the Free Tier benefits for Amazon EC2?", "answer_span": "you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it.", "chunk": "your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4."} -{"global_id": 310, "doc_id": "ec2", "chunk_id": "7", "question_id": 3, "question": "What should you do to determine your eligibility for the Free Tier?", "answer_span": "For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”.", "chunk": "your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. 
If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4."} -{"global_id": 311, "doc_id": "ec2", "chunk_id": "7", "question_id": 4, "question": "What is the first step to launch an EC2 instance?", "answer_span": "Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4."} -{"global_id": 312, "doc_id": "ec2", "chunk_id": "8", "question_id": 1, "question": "What is displayed at the top of the screen?", "answer_span": "the current AWS Region — for example, Ohio.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. 
From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range"} -{"global_id": 313, "doc_id": "ec2", "chunk_id": "8", "question_id": 2, "question": "What should you enter for the instance name?", "answer_span": "Under Name and tags, for Name, enter a descriptive name for your instance.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. 
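One hedged way to tighten such a rule with the AWS CLI is sketched here; the security group ID and the source address are placeholders.
# Sketch: replace the open SSH rule with one that allows a single placeholder address.
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.25/32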
In production, be sure to authorize access only from the appropriate individual IP address or range"} -{"global_id": 314, "doc_id": "ec2", "chunk_id": "8", "question_id": 3, "question": "What is recommended for your first Linux instance?", "answer_span": "we recommend that you choose Amazon Linux.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range"} -{"global_id": 315, "doc_id": "ec2", "chunk_id": "8", "question_id": 4, "question": "What happens if you choose to proceed without a key pair?", "answer_span": "you won't be able to connect to your instance using the methods described in this tutorial.", "chunk": "the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. 
Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range"} -{"global_id": 316, "doc_id": "ec2", "chunk_id": "9", "question_id": 1, "question": "What is recommended for access in production environments?", "answer_span": "In production, be sure to authorize access only from the appropriate individual IP address or range of addresses.", "chunk": "enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. 
• (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status"} -{"global_id": 318, "doc_id": "ec2", "chunk_id": "9", "question_id": 3, "question": "What should you do if the VPC isn't configured for public internet access?", "answer_span": "If the VPC isn't configured for public internet access, you won't be able to connect to your instance.", "chunk": "enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. • (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. 
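The same state transition and status checks can also be watched from the AWS CLI; a brief sketch with a placeholder instance ID follows.
# Sketch: check the instance state, then wait until both status checks pass.
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].State.Name' --output text
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef0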
Choose the Status"} -{"global_id": 319, "doc_id": "ec2", "chunk_id": "9", "question_id": 4, "question": "What happens to the instance state after it starts?", "answer_span": "After the instance starts, its state changes to running.", "chunk": "enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. • (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status"} -{"global_id": 320, "doc_id": "ec2", "chunk_id": "10", "question_id": 1, "question": "What should you do after the launch is successful?", "answer_span": "choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch.", "chunk": "launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. 
On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window"} -{"global_id": 321, "doc_id": "ec2", "chunk_id": "10", "question_id": 2, "question": "What is the initial state of the instance after launching?", "answer_span": "The initial instance state is pending.", "chunk": "launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window"} -{"global_id": 322, "doc_id": "ec2", "chunk_id": "10", "question_id": 3, "question": "What should you do if you can't connect to your instance?", "answer_span": "see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance.", "chunk": "launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. 
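For a Linux instance, the full sequence from a local terminal typically looks like the sketch below, reusing the placeholder key file and public DNS name shown elsewhere in this guide.
# Sketch: restrict the key file permissions, then connect over SSH.
chmod 400 key-pair-name.pem
ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com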
If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window"} -{"global_id": 323, "doc_id": "ec2", "chunk_id": "10", "question_id": 4, "question": "What command should you run to verify that you have an SSH client installed on Windows?", "answer_span": "run the ssh command to verify that you have an SSH client installed.", "chunk": "launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. 
In a terminal window"} -{"global_id": 324, "doc_id": "ec2", "chunk_id": "11", "question_id": 1, "question": "What is the name of the private key file in the example SSH command?", "answer_span": "key-pair-name.pem is the name of your private key file", "chunk": "example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language"} -{"global_id": 325, "doc_id": "ec2", "chunk_id": "11", "question_id": 2, "question": "What should you do if the private key file is not in the current directory?", "answer_span": "you must specify the fully-qualified path to the key file in this command", "chunk": "example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. 
User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language"} -{"global_id": 326, "doc_id": "ec2", "chunk_id": "11", "question_id": 3, "question": "What should you verify to ensure security when connecting to your instance?", "answer_span": "Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output", "chunk": "example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. 
To determine the correct username, identify the language"} -{"global_id": 327, "doc_id": "ec2", "chunk_id": "11", "question_id": 4, "question": "What must you do to connect to a Windows instance using RDP?", "answer_span": "you must retrieve the initial administrator password and then enter this password when you connect to your instance", "chunk": "example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language"} -{"global_id": 328, "doc_id": "ec2", "chunk_id": "12", "question_id": 1, "question": "What action must the account have permission to call?", "answer_span": "account must have permission to call the GetPasswordData action.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. 
Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} -{"global_id": 329, "doc_id": "ec2", "chunk_id": "12", "question_id": 2, "question": "What is the default username for an English OS?", "answer_span": "for an English OS, the username is Administrator.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} -{"global_id": 330, "doc_id": "ec2", "chunk_id": "12", "question_id": 3, "question": "What should you do to retrieve the initial administrator password?", "answer_span": "To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. 
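Given that permission, the password retrieval can also be scripted; the following sketch uses a placeholder instance ID and the key pair specified at launch, and the CLI decrypts the password locally.
# Sketch: fetch and decrypt the initial Windows administrator password.
aws ec2 get-password-data \
    --instance-id i-0123456789abcdef0 \
    --priv-launch-key key-pair-name.pem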
The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} -{"global_id": 331, "doc_id": "ec2", "chunk_id": "12", "question_id": 4, "question": "What must the username you choose match?", "answer_span": "The username you choose must match the language of the operating system (OS) contained in the AMI that you used to launch your instance.", "chunk": "account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. 
Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password"} -{"global_id": 332, "doc_id": "ec2", "chunk_id": "13", "question_id": 1, "question": "What should you do after selecting the file to copy its contents?", "answer_span": "Select the file and choose Open to copy the entire contents of the file to this window.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} -{"global_id": 333, "doc_id": "ec2", "chunk_id": "13", "question_id": 2, "question": "What appears under Password after choosing Decrypt password?", "answer_span": "the default administrator password for the instance appears under Password, replacing the Get password link shown previously.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. 
When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} -{"global_id": 334, "doc_id": "ec2", "chunk_id": "13", "question_id": 3, "question": "What is required to connect to the instance?", "answer_span": "This password is required to connect to the instance.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} -{"global_id": 335, "doc_id": "ec2", "chunk_id": "13", "question_id": 4, "question": "What should you do if you receive a warning about the publisher of the remote connection?", "answer_span": "If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue.", "chunk": "that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. 
The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect"} -{"global_id": 336, "doc_id": "ec2", "chunk_id": "14", "question_id": 1, "question": "What should you do if you trust the self-signed certificate?", "answer_span": "If you trust the certificate, choose Yes to connect to your instance.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. 
Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} -{"global_id": 337, "doc_id": "ec2", "chunk_id": "14", "question_id": 2, "question": "How can you confirm the identity of the remote computer on Windows?", "answer_span": "Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} -{"global_id": 338, "doc_id": "ec2", "chunk_id": "14", "question_id": 3, "question": "What happens if the RDP connection is successful?", "answer_span": "If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 
• [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} -{"global_id": 339, "doc_id": "ec2", "chunk_id": "14", "question_id": 4, "question": "What should you do after finishing with the instance created for the tutorial?", "answer_span": "After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance.", "chunk": "to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. 
You'll stop incurring charges for that instance or usage that counts against your Free Tier limits"} -{"global_id": 340, "doc_id": "ec2", "chunk_id": "15", "question_id": 1, "question": "What happens when you terminate an instance?", "answer_span": "Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} -{"global_id": 341, "doc_id": "ec2", "chunk_id": "15", "question_id": 2, "question": "How can you avoid incurring charges while keeping your instance?", "answer_span": "To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. 
After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} -{"global_id": 342, "doc_id": "ec2", "chunk_id": "15", "question_id": 3, "question": "What is the first step to terminate your instance?", "answer_span": "In the navigation pane, choose Instances.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} -{"global_id": 343, "doc_id": "ec2", "chunk_id": "15", "question_id": 4, "question": "What should you do after your instance is terminated?", "answer_span": "After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted.", "chunk": "instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. 
You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see"} -{"global_id": 344, "doc_id": "ec2", "chunk_id": "16", "question_id": 1, "question": "What should you configure to notify you if your usage exceeds the Free Tier?", "answer_span": "Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025).", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. 
For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage •"} -{"global_id": 345, "doc_id": "ec2", "chunk_id": "16", "question_id": 2, "question": "Where can you find information about creating an Amazon EBS volume?", "answer_span": "For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide.", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage •"} -{"global_id": 346, "doc_id": "ec2", "chunk_id": "16", "question_id": 3, "question": "What is recommended to manage access to AWS resources and APIs?", "answer_span": "Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible.", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. 
Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage •"} -{"global_id": 347, "doc_id": "ec2", "chunk_id": "16", "question_id": 4, "question": "What tool can be used to automatically discover and scan Amazon EC2 instances for vulnerabilities?", "answer_span": "Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure.", "chunk": "Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. 
Storage •"} -{"global_id": 348, "doc_id": "ec2", "chunk_id": "17", "question_id": 1, "question": "What should you use to monitor your Amazon EC2 resources against security best practices?", "answer_span": "Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} -{"global_id": 349, "doc_id": "ec2", "chunk_id": "17", "question_id": 2, "question": "What is recommended for data persistence after instance termination?", "answer_span": "Ensure that the volume with your data persists after instance termination.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. 
For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} -{"global_id": 350, "doc_id": "ec2", "chunk_id": "17", "question_id": 3, "question": "What should you do to store temporary data in your instance?", "answer_span": "Use the instance store available for your instance to store temporary data.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} -{"global_id": 351, "doc_id": "ec2", "chunk_id": "17", "question_id": 4, "question": "What tool can you use to inspect your AWS environment for recommendations?", "answer_span": "Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.", "chunk": "For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. 
For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back"} -{"global_id": 352, "doc_id": "ec2", "chunk_id": "18", "question_id": 1, "question": "What tool can be used to inspect your AWS environment and make recommendations?", "answer_span": "Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. 
• Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} -{"global_id": 353, "doc_id": "ec2", "chunk_id": "18", "question_id": 2, "question": "What should you regularly back up using Amazon EBS snapshots?", "answer_span": "Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} -{"global_id": 354, "doc_id": "ec2", "chunk_id": "18", "question_id": 3, "question": "What is the recommended time-to-live (TTL) value for applications?", "answer_span": "Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. 
Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} -{"global_id": 355, "doc_id": "ec2", "chunk_id": "18", "question_id": 4, "question": "What should you do to ensure data and services are restored successfully?", "answer_span": "Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully.", "chunk": "quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. 
If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required"} -{"global_id": 356, "doc_id": "ec2", "chunk_id": "19", "question_id": 1, "question": "What is an Amazon Machine Image (AMI)?", "answer_span": "An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} -{"global_id": 357, "doc_id": "ec2", "chunk_id": "19", "question_id": 2, "question": "What must you specify when launching an instance?", "answer_span": "You must specify an AMI when you launch an instance.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. 
You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} -{"global_id": 358, "doc_id": "ec2", "chunk_id": "19", "question_id": 3, "question": "What can you do with an AMI that you created?", "answer_span": "You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} -{"global_id": 359, "doc_id": "ec2", "chunk_id": "19", "question_id": 4, "question": "What types of AMIs can you use to launch instances?", "answer_span": "You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace.", "chunk": "value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. 
The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS"} -{"global_id": 360, "doc_id": "ec2", "chunk_id": "20", "question_id": 1, "question": "What can you do with your AMI using the AWS Marketplace?", "answer_span": "You can sell your AMI using the AWS Marketplace.", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} -{"global_id": 361, "doc_id": "ec2", "chunk_id": "20", "question_id": 2, "question": "What is one of the topics covered in the contents related to AMIs?", "answer_span": "AMI types and characteristics in Amazon EC2", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} -{"global_id": 362, "doc_id": "ec2", "chunk_id": "20", "question_id": 3, "question": "What does the document mention about AMI lifecycle?", "answer_span": "Amazon EC2 AMI lifecycle", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. 
Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} -{"global_id": 363, "doc_id": "ec2", "chunk_id": "20", "question_id": 4, "question": "What is one of the behaviors associated with instance launch in Amazon EC2?", "answer_span": "Instance launch behavior with Amazon EC2 boot modes", "chunk": "created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20"} -{"global_id": 364, "doc_id": "batch", "chunk_id": "0", "question_id": 1, "question": "What does AWS Batch help you to do?", "answer_span": "AWS Batch helps you to run batch computing workloads on the AWS Cloud.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. 
This provides"} -{"global_id": 365, "doc_id": "batch", "chunk_id": "0", "question_id": 2, "question": "What type of workloads can AWS Batch efficiently provision resources for?", "answer_span": "AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides"} -{"global_id": 366, "doc_id": "batch", "chunk_id": "0", "question_id": 3, "question": "What does AWS Batch eliminate the need for in terms of software management?", "answer_span": "With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. 
With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides"} -{"global_id": 367, "doc_id": "batch", "chunk_id": "0", "question_id": 4, "question": "How does AWS Batch support machine learning workloads?", "answer_span": "For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. 
This provides"} -{"global_id": 368, "doc_id": "batch", "chunk_id": "1", "question_id": 1, "question": "What capabilities does AWS Batch provide for SageMaker Training jobs?", "answer_span": "For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} -{"global_id": 369, "doc_id": "batch", "chunk_id": "1", "question_id": 2, "question": "What is the shared responsibility model in AWS Batch?", "answer_span": "This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? 
If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} -{"global_id": 370, "doc_id": "batch", "chunk_id": "1", "question_id": 3, "question": "What should first-time AWS Batch users read?", "answer_span": "If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. 
You can find the AWS Batch commands in the"} -{"global_id": 371, "doc_id": "batch", "chunk_id": "1", "question_id": 4, "question": "How can you access AWS Batch?", "answer_span": "You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} -{"global_id": 372, "doc_id": "batch", "chunk_id": "2", "question_id": 1, "question": "What operating systems support the AWS Command Line Interface?", "answer_span": "The AWS Command Line Interface is supported on Windows, macOS, and Linux.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. 
After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} -{"global_id": 373, "doc_id": "batch", "chunk_id": "2", "question_id": 2, "question": "What does AWS Batch simplify?", "answer_span": "AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} -{"global_id": 374, "doc_id": "batch", "chunk_id": "2", "question_id": 3, "question": "What can you define after a compute environment is associated with a job queue?", "answer_span": "you can define job definitions that specify which Docker container images to run your jobs.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 
3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} -{"global_id": 375, "doc_id": "batch", "chunk_id": "2", "question_id": 4, "question": "What are managed compute environments used for?", "answer_span": "A compute environment is a set of managed or unmanaged compute resources that are used to run jobs.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. 
Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} -{"global_id": 376, "doc_id": "batch", "chunk_id": "3", "question_id": 1, "question": "What types of EC2 instances can you set up in compute environments?", "answer_span": "You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} -{"global_id": 377, "doc_id": "batch", "chunk_id": "3", "question_id": 2, "question": "What are the components you can specify for the compute environment?", "answer_span": "You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. 
Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} -{"global_id": 378, "doc_id": "batch", "chunk_id": "3", "question_id": 3, "question": "What happens when you submit an AWS Batch job?", "answer_span": "When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} -{"global_id": 379, "doc_id": "batch", "chunk_id": "3", "question_id": 4, "question": "What does a job definition specify?", "answer_span": "A job definition specifies how jobs are to be run.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. 
You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} -{"global_id": 380, "doc_id": "batch", "chunk_id": "4", "question_id": 1, "question": "What is a job definition described as in the text?", "answer_span": "of a job definition as a blueprint for the resources in your job.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. 
You can reduce the under-utilization of compute resources by allocating"} -{"global_id": 381, "doc_id": "batch", "chunk_id": "4", "question_id": 2, "question": "What can you supply your job with to provide access to other AWS resources?", "answer_span": "You can supply your job with an IAM role to provide access to other AWS resources.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} -{"global_id": 382, "doc_id": "batch", "chunk_id": "4", "question_id": 3, "question": "What does a job in AWS Batch run as?", "answer_span": "It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. 
Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} -{"global_id": 383, "doc_id": "batch", "chunk_id": "4", "question_id": 4, "question": "What is a consumable resource according to the text?", "answer_span": "A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} -{"global_id": 384, "doc_id": "batch", "chunk_id": "5", "question_id": 1, "question": "What does AWS Batch take into account when scheduling a job?", "answer_span": "You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. 
You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment define how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} -{"global_id": 385, "doc_id": "batch", "chunk_id": "5", "question_id": 2, "question": "What is a Service Environment in AWS Batch?", "answer_span": "A Service Environment define how AWS Batch integrates with SageMaker for job execution.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment define how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. 
This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} -{"global_id": 386, "doc_id": "batch", "chunk_id": "5", "question_id": 3, "question": "What is a service job in AWS Batch?", "answer_span": "A service job is a unit of work that you submit to AWS Batch to run on a service environment.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment define how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} -{"global_id": 387, "doc_id": "batch", "chunk_id": "5", "question_id": 4, "question": "How do service jobs benefit from AWS Batch?", "answer_span": "This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . 
Service Environment A Service Environment defines how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} -{"global_id": 388, "doc_id": "batch", "chunk_id": "6", "question_id": 1, "question": "What services must you be using to soon use AWS Batch?", "answer_span": "If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. 
Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign"} -{"global_id": 389, "doc_id": "batch", "chunk_id": "6", "question_id": 2, "question": "What must you do if you don't see support for an AWS Batch feature in the AWS CLI?", "answer_span": "If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign"} -{"global_id": 390, "doc_id": "batch", "chunk_id": "6", "question_id": 3, "question": "What is the first step to create an AWS account?", "answer_span": "To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. 
Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign"} -{"global_id": 391, "doc_id": "batch", "chunk_id": "6", "question_id": 4, "question": "What is one of the tasks you need to complete to set up AWS Batch?", "answer_span": "Complete the following tasks to get set up for AWS Batch.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign"} -{"global_id": 392, "doc_id": "batch", "chunk_id": "7", "question_id": 1, "question": "What is the first step to sign up for an AWS account?", "answer_span": "1. Open https://portal.aws.amazon.com/billing/signup.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. 
The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} -{"global_id": 393, "doc_id": "batch", "chunk_id": "7", "question_id": 2, "question": "What is created when you sign up for an AWS account?", "answer_span": "When you sign up for an AWS account, an AWS account root user is created.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. 
Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} -{"global_id": 394, "doc_id": "batch", "chunk_id": "7", "question_id": 3, "question": "What should you do to secure your AWS account root user?", "answer_span": "Turn on multi-factor authentication (MFA) for your root user.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} -{"global_id": 395, "doc_id": "batch", "chunk_id": "7", "question_id": 4, "question": "Where can you manage your account after signing up?", "answer_span": "At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. 
Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User"} -{"global_id": 396, "doc_id": "batch", "chunk_id": "8", "question_id": 1, "question": "What is the first step to create a user with administrative access?", "answer_span": "1. Enable IAM Identity Center.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} -{"global_id": 397, "doc_id": "batch", "chunk_id": "8", "question_id": 2, "question": "Where can you find instructions for enabling AWS IAM Identity Center?", "answer_span": "For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. 
For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} -{"global_id": 398, "doc_id": "batch", "chunk_id": "8", "question_id": 3, "question": "What should you do to sign in as a user with administrative access?", "answer_span": "To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. 
Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} -{"global_id": 399, "doc_id": "batch", "chunk_id": "8", "question_id": 4, "question": "What is required for your AWS Batch compute environments and container instances?", "answer_span": "Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf.", "chunk": "device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. Create a user with administrative access 8 AWS Batch User Guide For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide. Assign access to additional users 1. In IAM Identity Center, create a permission set that follows the best practice of applying leastprivilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide. 2. Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide. Create IAM roles for your compute environments and container instances Your AWS Batch compute environments and container instances require AWS account credentials to make calls to other AWS APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions,"} -{"global_id": 400, "doc_id": "batch", "chunk_id": "9", "question_id": 1, "question": "What role must be created to provide credentials to compute environments and container instances?", "answer_span": "Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments.", "chunk": "APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. 
Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} -{"global_id": 401, "doc_id": "batch", "chunk_id": "9", "question_id": 2, "question": "What should you do if you plan to use the AWS CLI instead of the AWS Batch console?", "answer_span": "If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment.", "chunk": "APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} -{"global_id": 402, "doc_id": "batch", "chunk_id": "9", "question_id": 3, "question": "How does AWS secure the login information for your instance?", "answer_span": "AWS uses public-key cryptography to secure the login information for your instance.", "chunk": "APIs on your behalf. 
Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} -{"global_id": 403, "doc_id": "batch", "chunk_id": "9", "question_id": 4, "question": "What should you do if you plan to launch instances in multiple AWS Regions?", "answer_span": "Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region.", "chunk": "APIs on your behalf. Create an AWS Identity and Access Management role that provides these credentials to your compute environments and container instances, then associate that role with your compute environments. Create IAM roles 9 AWS Batch User Guide Note To verify that your AWS account has the required permissions, see Initial IAM service set up for your account. The AWS Batch compute environment and container instance roles are automatically created for you in the console first-run experience. So, if you intend to use the AWS Batch console, you can move ahead to the next section. If you plan to use the AWS CLI instead, complete the procedures in Using service-linked roles for AWS Batch, Amazon ECS instance role, and Tutorial: Create the IAM execution role before creating your first compute environment. Create a key pair for your instances AWS uses public-key cryptography to secure the login information for your instance. A Linux instance, such as an AWS Batch compute environment container instance, has no password to use for SSH access. You use a key pair to log in to your instance securely. You specify the name of the key pair when you create your compute environment, then provide the private key when you log in using SSH. If you didn't create a key pair already, you can create one using the Amazon EC2 console. Note that, if you plan to launch instances in multiple AWS Regions, create a key pair in each Region. 
For more information about Regions, see Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however,"} -{"global_id": 404, "doc_id": "batch", "chunk_id": "10", "question_id": 1, "question": "What is the first step to create a key pair in Amazon EC2?", "answer_span": "Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however, key pairs are specific to a Region. For example, if you plan to launch an instance in the US West (Oregon) Region, create a key pair for the instance in the same Region. 3. In the navigation pane, choose Key Pairs, Create Key Pair. 4. In the Create Key Pair dialog box, for Key pair name, enter a name for the new key pair , and choose Create. Choose a name that you can remember, such as your user name, followed by key-pair, plus the Region name. For example, me-key-pair-uswest2. Create a key pair 10 AWS Batch 5. User Guide The private key file is automatically downloaded by your browser. The base file name is the name that you specified as the name of your key pair, and the file name extension is .pem. Save the private key file in a safe place. Important This is the only chance for you to save the private key file. You need to provide the name of your key pair when you launch an instance and the corresponding private key each time that you connect to the instance. 6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key"} -{"global_id": 405, "doc_id": "batch", "chunk_id": "10", "question_id": 2, "question": "Why is it important to create a key pair in the same Region as the instance?", "answer_span": "key pairs are specific to a Region.", "chunk": "Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however, key pairs are specific to a Region. For example, if you plan to launch an instance in the US West (Oregon) Region, create a key pair for the instance in the same Region. 3. In the navigation pane, choose Key Pairs, Create Key Pair. 4. In the Create Key Pair dialog box, for Key pair name, enter a name for the new key pair , and choose Create. Choose a name that you can remember, such as your user name, followed by key-pair, plus the Region name. For example, me-key-pair-uswest2. Create a key pair 10 AWS Batch 5. User Guide The private key file is automatically downloaded by your browser. The base file name is the name that you specified as the name of your key pair, and the file name extension is .pem. Save the private key file in a safe place. 
Important This is the only chance for you to save the private key file. You need to provide the name of your key pair when you launch an instance and the corresponding private key each time that you connect to the instance. 6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key"} -{"global_id": 406, "doc_id": "batch", "chunk_id": "10", "question_id": 3, "question": "What file extension is used for the private key file?", "answer_span": "the file name extension is .pem.", "chunk": "Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however, key pairs are specific to a Region. For example, if you plan to launch an instance in the US West (Oregon) Region, create a key pair for the instance in the same Region. 3. In the navigation pane, choose Key Pairs, Create Key Pair. 4. In the Create Key Pair dialog box, for Key pair name, enter a name for the new key pair , and choose Create. Choose a name that you can remember, such as your user name, followed by key-pair, plus the Region name. For example, me-key-pair-uswest2. Create a key pair 10 AWS Batch 5. User Guide The private key file is automatically downloaded by your browser. The base file name is the name that you specified as the name of your key pair, and the file name extension is .pem. Save the private key file in a safe place. Important This is the only chance for you to save the private key file. You need to provide the name of your key pair when you launch an instance and the corresponding private key each time that you connect to the instance. 6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key"} -{"global_id": 407, "doc_id": "batch", "chunk_id": "10", "question_id": 4, "question": "What command should you use to set the permissions of your private key file on a Mac or Linux computer?", "answer_span": "$ chmod 400 your_user_name-key-pair-region_name.pem", "chunk": "Regions and Availability Zones in the Amazon EC2 User Guide. To create a key pair 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select an AWS Region for the key pair. You can select any Region that's available to you, regardless of your location: however, key pairs are specific to a Region. For example, if you plan to launch an instance in the US West (Oregon) Region, create a key pair for the instance in the same Region. 3. In the navigation pane, choose Key Pairs, Create Key Pair. 4. In the Create Key Pair dialog box, for Key pair name, enter a name for the new key pair , and choose Create. Choose a name that you can remember, such as your user name, followed by key-pair, plus the Region name. For example, me-key-pair-uswest2. 
Create a key pair 10 AWS Batch 5. User Guide The private key file is automatically downloaded by your browser. The base file name is the name that you specified as the name of your key pair, and the file name extension is .pem. Save the private key file in a safe place. Important This is the only chance for you to save the private key file. You need to provide the name of your key pair when you launch an instance and the corresponding private key each time that you connect to the instance. 6. If you use an SSH client on a Mac or Linux computer to connect to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key"} -{"global_id": 408, "doc_id": "batch", "chunk_id": "11", "question_id": 1, "question": "What command is used to set the permissions of your private key file?", "answer_span": "$ chmod 400 your_user_name-key-pair-region_name.pem", "chunk": "to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key pair To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your SSH client with the -i option and the path to your private key. To connect to your Linux instance from a computer running Windows, use either MindTerm or PuTTY. If you plan to use PuTTY, install it and use the following procedure to convert the .pem file to a .ppk file. (Optional) To prepare to connect to a Linux instance from Windows using PuTTY 1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite. 2. Start PuTTYgen (for example, from the Start menu, choose All Programs, PuTTY, and PuTTYgen). 3. Under Type of key to generate, choose RSA. If you're using an earlier version of PuTTYgen, choose SSH-2 RSA. Create a key pair 11 AWS Batch 4. User Guide Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, choose the option to display files of all types. 5. Select the private key file that you created in the previous procedure and choose Open. Choose OK to dismiss the confirmation dialog box. 6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a"} -{"global_id": 409, "doc_id": "batch", "chunk_id": "11", "question_id": 2, "question": "What should you specify to your SSH client when connecting to your Linux instance?", "answer_span": "specify the .pem file to your SSH client with the -i option and the path to your private key.", "chunk": "to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. 
To connect to your instance using your key pair To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your SSH client with the -i option and the path to your private key. To connect to your Linux instance from a computer running Windows, use either MindTerm or PuTTY. If you plan to use PuTTY, install it and use the following procedure to convert the .pem file to a .ppk file. (Optional) To prepare to connect to a Linux instance from Windows using PuTTY 1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite. 2. Start PuTTYgen (for example, from the Start menu, choose All Programs, PuTTY, and PuTTYgen). 3. Under Type of key to generate, choose RSA. If you're using an earlier version of PuTTYgen, choose SSH-2 RSA. Create a key pair 11 AWS Batch 4. User Guide Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, choose the option to display files of all types. 5. Select the private key file that you created in the previous procedure and choose Open. Choose OK to dismiss the confirmation dialog box. 6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a"} -{"global_id": 410, "doc_id": "batch", "chunk_id": "11", "question_id": 3, "question": "What is the first step to prepare to connect to a Linux instance from Windows using PuTTY?", "answer_span": "Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite.", "chunk": "to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key pair To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your SSH client with the -i option and the path to your private key. To connect to your Linux instance from a computer running Windows, use either MindTerm or PuTTY. If you plan to use PuTTY, install it and use the following procedure to convert the .pem file to a .ppk file. (Optional) To prepare to connect to a Linux instance from Windows using PuTTY 1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite. 2. Start PuTTYgen (for example, from the Start menu, choose All Programs, PuTTY, and PuTTYgen). 3. Under Type of key to generate, choose RSA. If you're using an earlier version of PuTTYgen, choose SSH-2 RSA. Create a key pair 11 AWS Batch 4. User Guide Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, choose the option to display files of all types. 5. Select the private key file that you created in the previous procedure and choose Open. Choose OK to dismiss the confirmation dialog box. 6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. 
Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a"} -{"global_id": 411, "doc_id": "batch", "chunk_id": "11", "question_id": 4, "question": "What type of key should you choose to generate in PuTTYgen?", "answer_span": "Under Type of key to generate, choose RSA.", "chunk": "to your Linux instance, use the following command to set the permissions of your private key file. That way, only you can read it. $ chmod 400 your_user_name-key-pair-region_name.pem For more information, see Amazon EC2 Key Pairs in the Amazon EC2 User Guide. To connect to your instance using your key pair To connect to your Linux instance from a computer running Mac or Linux, specify the .pem file to your SSH client with the -i option and the path to your private key. To connect to your Linux instance from a computer running Windows, use either MindTerm or PuTTY. If you plan to use PuTTY, install it and use the following procedure to convert the .pem file to a .ppk file. (Optional) To prepare to connect to a Linux instance from Windows using PuTTY 1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Be sure to install the entire suite. 2. Start PuTTYgen (for example, from the Start menu, choose All Programs, PuTTY, and PuTTYgen). 3. Under Type of key to generate, choose RSA. If you're using an earlier version of PuTTYgen, choose SSH-2 RSA. Create a key pair 11 AWS Batch 4. User Guide Choose Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your .pem file, choose the option to display files of all types. 5. Select the private key file that you created in the previous procedure and choose Open. Choose OK to dismiss the confirmation dialog box. 6. Choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a"} -{"global_id": 412, "doc_id": "batch", "chunk_id": "12", "question_id": 1, "question": "What should you do when saving the key?", "answer_span": "a warning about saving the key without a passphrase. Choose Yes.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. 
Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} -{"global_id": 413, "doc_id": "batch", "chunk_id": "12", "question_id": 2, "question": "What does PuTTY automatically add to the key file?", "answer_span": "PuTTY automatically adds the .ppk file extension.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} -{"global_id": 414, "doc_id": "batch", "chunk_id": "12", "question_id": 3, "question": "What is recommended for launching container instances?", "answer_span": "We strongly recommend that you launch your container instances in a VPC.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. 
IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} -{"global_id": 415, "doc_id": "batch", "chunk_id": "12", "question_id": 4, "question": "What do security groups control?", "answer_span": "Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level.", "chunk": "a warning about saving the key without a passphrase. Choose Yes. 7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds the .ppk file extension. Create a VPC With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources into a virtual network that you've defined. We strongly recommend that you launch your container instances in a VPC. If you have a default VPC, you also can skip this section and move to the next task Create a security group. To determine whether you have a default VPC, see Supported Platforms in the Amazon EC2 Console in the Amazon EC2 User Guide For information about how to create an Amazon VPC, see Create a VPC only in the Amazon VPC User Guide. Refer to the following table to determine what options to select. Option Value Resources to create VPC only Name Optionally provide a name for your VPC. IPv4 CIDR block IPv4 CIDR manual input The CIDR block size must have a size between /16 and /28. Create a VPC 12 AWS Batch User Guide Option Value IPv6 CIDR block No IPv6 CIDR block Tenancy Default For more information about Amazon VPC, see What is Amazon VPC? in the Amazon VPC User Guide. Create a security group Security groups act as a firewall for associated compute environment container instances, controlling both inbound and outbound traffic at the container instance level. A security group can be used only in the VPC for which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that"} -{"global_id": 416, "doc_id": "batch", "chunk_id": "13", "question_id": 1, "question": "What can you add to a security group to connect to your container instance from your IP address?", "answer_span": "You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. 
Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https:// checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3. Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can"} -{"global_id": 417, "doc_id": "batch", "chunk_id": "13", "question_id": 2, "question": "What should you do if you plan to launch container instances in multiple Regions?", "answer_span": "Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https:// checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3. Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. 
However, you might want to add an SSH rule. That way, you can"} -{"global_id": 418, "doc_id": "batch", "chunk_id": "13", "question_id": 3, "question": "How can you find your public IP address?", "answer_span": "Note You need the public IP address of your local computer, which you can get using a service.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https:// checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3. Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can"} -{"global_id": 419, "doc_id": "batch", "chunk_id": "13", "question_id": 4, "question": "What is the first step to create a security group using the console?", "answer_span": "Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.", "chunk": "which it is created. You can add rules to a security group that enable you to connect to your container instance from your IP address using SSH. You can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere. Add any rules to open ports that are required by your tasks. Note that if you plan to launch container instances in multiple Regions, you need to create a security group in each Region. For more information, see Regions and Availability Zones in the Amazon EC2 User Guide. Note You need the public IP address of your local computer, which you can get using a service. For example, we provide the following service: http://checkip.amazonaws.com/ or https:// checkip.amazonaws.com/. To locate another service that provides your IP address, use the search phrase \"what is my IP address.\" If you're connecting through an Internet service provider (ISP) or from behind a firewall without a static IP address, find out the range of IP addresses that are used by client computers. To create a security group using the console 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/. 2. In the navigation pane, choose Security Groups. 3. 
Choose Create security group. Create a security group 13 AWS Batch 4. User Guide Enter a name and description for the security group. You cannot change the name and description of a security group after it is created. 5. From VPC, choose the VPC. 6. (Optional) By default, new security groups start with only an outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can"} -{"global_id": 420, "doc_id": "batch", "chunk_id": "14", "question_id": 1, "question": "What must you do to enable any inbound traffic or restrict outbound traffic?", "answer_span": "You must add rules to enable any inbound traffic or to restrict the outbound traffic.", "chunk": "outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands. If you want your container instance to host a job that runs a web server, you can also add rules for HTTP. Complete the following steps to add these optional security group rules. On the Inbound tab, create the following rules and choose Create: • Choose Add Rule. For Type, choose HTTP. For Source, choose Anywhere (0.0.0.0/0). • Choose Add Rule. For Type, choose SSH. For Source, choose Custom IP, and specify the public IP address of your computer or network in Classless Inter-Domain Routing (CIDR) notation. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. To specify an individual IP address in CIDR notation, choose My IP. This adds the routing prefix /32 to the public IP address. Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time. 7. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value. 8. Choose Create security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest"} -{"global_id": 421, "doc_id": "batch", "chunk_id": "14", "question_id": 2, "question": "What is the purpose of adding an SSH rule to the AWS Batch container instance?", "answer_span": "However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands.", "chunk": "outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands. If you want your container instance to host a job that runs a web server, you can also add rules for HTTP. Complete the following steps to add these optional security group rules. 
On the Inbound tab, create the following rules and choose Create: • Choose Add Rule. For Type, choose HTTP. For Source, choose Anywhere (0.0.0.0/0). • Choose Add Rule. For Type, choose SSH. For Source, choose Custom IP, and specify the public IP address of your computer or network in Classless Inter-Domain Routing (CIDR) notation. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. To specify an individual IP address in CIDR notation, choose My IP. This adds the routing prefix /32 to the public IP address. Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time. 7. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value. 8. Choose Create security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest"} -{"global_id": 422, "doc_id": "batch", "chunk_id": "14", "question_id": 3, "question": "What steps should you follow to add optional security group rules?", "answer_span": "Complete the following steps to add these optional security group rules.", "chunk": "outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands. If you want your container instance to host a job that runs a web server, you can also add rules for HTTP. Complete the following steps to add these optional security group rules. On the Inbound tab, create the following rules and choose Create: • Choose Add Rule. For Type, choose HTTP. For Source, choose Anywhere (0.0.0.0/0). • Choose Add Rule. For Type, choose SSH. For Source, choose Custom IP, and specify the public IP address of your computer or network in Classless Inter-Domain Routing (CIDR) notation. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. To specify an individual IP address in CIDR notation, choose My IP. This adds the routing prefix /32 to the public IP address. Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time. 7. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value. 8. Choose Create security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. 
Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest"} -{"global_id": 423, "doc_id": "batch", "chunk_id": "14", "question_id": 4, "question": "What is the recommendation regarding SSH access from all IP addresses?", "answer_span": "Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time.", "chunk": "outbound rule that allows all traffic to leave the resource. You must add rules to enable any inbound traffic or to restrict the outbound traffic. AWS Batch container instances don't require any inbound ports to be open. However, you might want to add an SSH rule. That way, you can log into the container instance and examine the containers in jobs with Docker commands. If you want your container instance to host a job that runs a web server, you can also add rules for HTTP. Complete the following steps to add these optional security group rules. On the Inbound tab, create the following rules and choose Create: • Choose Add Rule. For Type, choose HTTP. For Source, choose Anywhere (0.0.0.0/0). • Choose Add Rule. For Type, choose SSH. For Source, choose Custom IP, and specify the public IP address of your computer or network in Classless Inter-Domain Routing (CIDR) notation. If your company allocates addresses from a range, specify the entire range, such as 203.0.113.0/24. To specify an individual IP address in CIDR notation, choose My IP. This adds the routing prefix /32 to the public IP address. Note For security reasons, we don't recommend that you allow SSH access from all IP addresses (0.0.0.0/0) to your instance but only for testing purposes and only for a short time. 7. You can add tags now, or you can add them later. To add a tag, choose Add new tag and enter the tag key and value. 8. Choose Create security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest"} -{"global_id": 424, "doc_id": "batch", "chunk_id": "15", "question_id": 1, "question": "What is the purpose of the AWS Batch first-run wizard?", "answer_span": "You can use the AWS Batch first-run wizard to get started quickly with AWS Batch.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. 
Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup"} -{"global_id": 425, "doc_id": "batch", "chunk_id": "15", "question_id": 2, "question": "What should you do after completing the prerequisites for AWS Batch?", "answer_span": "After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup"} -{"global_id": 426, "doc_id": "batch", "chunk_id": "15", "question_id": 3, "question": "What does Amazon EC2 provide?", "answer_span": "Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. 
For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup"} -{"global_id": 427, "doc_id": "batch", "chunk_id": "15", "question_id": 4, "question": "How does using Amazon EC2 benefit application development and deployment?", "answer_span": "Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster.", "chunk": "security group. To create a security group using the command line, see create-security-group (AWS CLI) For more information about security groups, see Work with security groups. Create a security group 14 AWS Batch User Guide Install the AWS CLI To use the AWS CLI with AWS Batch, install the latest AWS CLI version. For information about installing the AWS CLI or upgrading it to the latest version, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide. Install the AWS CLI 15 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. 
Overview This tutorial demonstrates how to setup"} -{"global_id": 428, "doc_id": "batch", "chunk_id": "16", "question_id": 1, "question": "What does Amazon EC2 enable you to do?", "answer_span": "Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} -{"global_id": 429, "doc_id": "batch", "chunk_id": "16", "question_id": 2, "question": "Who is the intended audience for this tutorial?", "answer_span": "This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. 
Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} -{"global_id": 430, "doc_id": "batch", "chunk_id": "16", "question_id": 3, "question": "How long is it expected to take to complete this tutorial?", "answer_span": "It should take about 10–15 minutes to complete this tutorial.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} -{"global_id": 431, "doc_id": "batch", "chunk_id": "16", "question_id": 4, "question": "What is a prerequisite before starting the tutorial?", "answer_span": "Create an AWS account if you don't have one.", "chunk": "to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. 
Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a"} -{"global_id": 432, "doc_id": "batch", "chunk_id": "17", "question_id": 1, "question": "What is the first step to create a compute environment?", "answer_span": "Step 1: Create a compute environment", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. 
For all other configuration"} -{"global_id": 433, "doc_id": "batch", "chunk_id": "17", "question_id": 2, "question": "What should you do before creating for production use?", "answer_span": "we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements.", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration"} -{"global_id": 434, "doc_id": "batch", "chunk_id": "17", "question_id": 3, "question": "What is the default name of the Instance role?", "answer_span": "The default name of the Instance role is ecsInstanceRole.", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. 
The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration"} -{"global_id": 435, "doc_id": "batch", "chunk_id": "17", "question_id": 4, "question": "What does a job queue do in AWS Batch?", "answer_span": "A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment.", "chunk": "role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide 1. Open the AWS Batch console first-run wizard. 2. For Configure job and orchestration type, choose Amazon Elastic Compute Cloud(Amazon EC2). 3. Choose Next. 4. In the Compute environment configuration section for Name, specify a unique name for your compute environment. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 5. For Instance role, choose an existing instance role that has the required IAM permissions attached. This instance role allows the Amazon ECS container instances in your compute environment to make calls to the required AWS API operations. For more information, see Amazon ECS instance role. The default name of the Instance role is ecsInstanceRole. 6. For Instance configuration you can leave the default settings. 7. For Network configuration use your default VPC for the AWS Region. 8. Choose Next. Step 2: Create a job queue A job queue stores your submitted jobs until the AWS Batch Scheduler runs the job on a resource in your compute environment. For more information, see Job queues To create a job queue for an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration"} -{"global_id": 436, "doc_id": "batch", "chunk_id": "18", "question_id": 1, "question": "What is the maximum length for the job queue name?", "answer_span": "The name can be up to 128 characters in length.", "chunk": "an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. 
Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS"} -{"global_id": 437, "doc_id": "batch", "chunk_id": "18", "question_id": 2, "question": "What can the job queue name contain?", "answer_span": "It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).", "chunk": "an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS"} -{"global_id": 438, "doc_id": "batch", "chunk_id": "18", "question_id": 3, "question": "What should you do if you need to make changes on the Review and create page?", "answer_span": "If you need to make changes, choose Edit.", "chunk": "an Amazon EC2 orchestration, do the following: 1. 
For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS"} -{"global_id": 439, "doc_id": "batch", "chunk_id": "18", "question_id": 4, "question": "What is specified in AWS Batch job definitions?", "answer_span": "AWS Batch job definitions specify how jobs are to be run.", "chunk": "an Amazon EC2 orchestration, do the following: 1. For Job queue configuration for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 3: Create a job definition AWS Batch job definitions specify how jobs are to be run. Even though each job must reference a job definition, many of the parameters that are specified in the job definition can be overridden at runtime. Step 2: Create a job queue 18 AWS Batch User Guide To create the job definition: 1. For Create a job definition a. for Name, specify a unique name for your job queue. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). b. For Command - optional you can change hello world to a custom message or leave it as is. 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 4: Create a job To create a job, do the following: 1. In the Job configuration section for Name, specify a unique name for the job. The name can be up to 128 characters in length. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_). 2. For all other configuration options you can leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. 
A window opens as AWS"} -{"global_id": 440, "doc_id": "batch", "chunk_id": "19", "question_id": 1, "question": "What should you do if you need to make changes on the Review and create page?", "answer_span": "If you need to make changes, choose Edit.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message. Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} -{"global_id": 441, "doc_id": "batch", "chunk_id": "19", "question_id": 2, "question": "What happens after you choose Create resources?", "answer_span": "A window opens as AWS Batch starts to allocate your resources.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message. Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2. 
In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} -{"global_id": 442, "doc_id": "batch", "chunk_id": "19", "question_id": 3, "question": "How can you view the Job's output?", "answer_span": "To view the Job's output, do the following: Step 4: Create a job.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message. Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} -{"global_id": 443, "doc_id": "batch", "chunk_id": "19", "question_id": 4, "question": "What should you do to stop incurring charges for the Amazon EC2 instance?", "answer_span": "You can delete the instance to stop incurring charges.", "chunk": "leave the default value. 3. Choose Next. Step 5: Review and create On the Review and create page, review the configuration steps. If you need to make changes, choose Edit. When you're finished, choose Create resources. 1. For Review and create choose Create resources. 2. A window opens as AWS Batch starts to allocate your resources. Once complete choose Go to dashboard. On the dashboard you should see all of your allocated resources and that the job is in the Runnable state. Your job is scheduled to run and should complete in 2–3 minutes. Step 6: View the Job's output To view the Job's output, do the following: Step 4: Create a job 19 AWS Batch User Guide 1. In the navigation pane choose Jobs. 2. In the Job queue drop down choose the Job queue you created for the tutorial. 3. The Jobs table lists all of your Jobs and what their current status is. Once the Job's Status is Succeeded choose the Name of the Job to view the Job's details. 4. In the Details pane choose Log stream name. The CloudWatch console for the Job will open and there should be one event with the Message of hello world or your custom message. 
Step 7: Clean up your tutorial resources You are charged for the Amazon EC2 instance while it is enabled. You can delete the instance to stop incurring charges. To delete the resources you created, do the following: 1. In the navigation pane choose Job queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you"} -{"global_id": 444, "doc_id": "batch", "chunk_id": "20", "question_id": 1, "question": "What should you choose after selecting the Job queue you created for the tutorial?", "answer_span": "Choose Disable.", "chunk": "queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you created for this tutorial and then choose Disable. It may take 1–2 minutes for the compute environment to complete being disabled. 6. Once the compute environment’s State is Disabled, choose Delete. It may take 1–2 minutes for the compute environment to be deleted. Additional resources After you complete the tutorial, you might want to explore the following topics: • Explore the AWS Batch core components. For more information, see Components of AWS Batch. • Learn more about the different Compute Environments available in AWS Batch. • Learn more about Job queues and their different scheduling options. • Learn more about Job definitions and the different configuration options. • Learn more about the different types of Jobs. Step 7: Clean up your tutorial resources 20"} -{"global_id": 445, "doc_id": "batch", "chunk_id": "20", "question_id": 2, "question": "What do you do once the Job queue State is Disabled?", "answer_span": "you can choose Delete.", "chunk": "queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. 
Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you created for this tutorial and then choose Disable. It may take 1–2 minutes for the compute environment to complete being disabled. 6. Once the compute environment’s State is Disabled, choose Delete. It may take 1–2 minutes for the compute environment to be deleted. Additional resources After you complete the tutorial, you might want to explore the following topics: • Explore the AWS Batch core components. For more information, see Components of AWS Batch. • Learn more about the different Compute Environments available in AWS Batch. • Learn more about Job queues and their different scheduling options. • Learn more about Job definitions and the different configuration options. • Learn more about the different types of Jobs. Step 7: Clean up your tutorial resources 20"} -{"global_id": 447, "doc_id": "batch", "chunk_id": "20", "question_id": 4, "question": "What topics might you want to explore after completing the tutorial?", "answer_span": "you might want to explore the following topics: • Explore the AWS Batch core components.", "chunk": "queue. 2. In the Job queue table choose the Job queue you created for the tutorial. 3. Choose Disable. Once the Job queue State is Disabled you can choose Delete. 4. Once the Job queue is deleted, in the navigation pane choose Compute environments. 5. Choose the compute environment you created for this tutorial and then choose Disable. It may take 1–2 minutes for the compute environment to complete being disabled. 6. Once the compute environment’s State is Disabled, choose Delete. It may take 1–2 minutes for the compute environment to be deleted. Additional resources After you complete the tutorial, you might want to explore the following topics: • Explore the AWS Batch core components. For more information, see Components of AWS Batch. • Learn more about the different Compute Environments available in AWS Batch. • Learn more about Job queues and their different scheduling options. • Learn more about Job definitions and the different configuration options. • Learn more about the different types of Jobs. Step 7: Clean up your tutorial resources 20"} -{"global_id": 448, "doc_id": "eks", "chunk_id": "0", "question_id": 1, "question": "What does Amazon EKS provide?", "answer_span": "Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. 
Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} -{"global_id": 449, "doc_id": "eks", "chunk_id": "0", "question_id": 2, "question": "What are two main approaches to using Amazon EKS?", "answer_span": "Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. 
The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} -{"global_id": 450, "doc_id": "eks", "chunk_id": "0", "question_id": 3, "question": "How does Amazon EKS help with application deployment?", "answer_span": "With EKS, you can: • Deploy applications faster with less operational overhead.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} -{"global_id": 451, "doc_id": "eks", "chunk_id": "0", "question_id": 4, "question": "What is the benefit of using EKS Auto Mode?", "answer_span": "It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. 
With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} -{"global_id": 452, "doc_id": "eks", "chunk_id": "1", "question_id": 1, "question": "What does Amazon EKS help you accelerate?", "answer_span": "Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. 
Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} -{"global_id": 453, "doc_id": "eks", "chunk_id": "1", "question_id": 2, "question": "What management interfaces does EKS offer?", "answer_span": "EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. 
For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} -{"global_id": 454, "doc_id": "eks", "chunk_id": "1", "question_id": 3, "question": "What compute resources does EKS allow?", "answer_span": "For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} -{"global_id": 455, "doc_id": "eks", "chunk_id": "1", "question_id": 4, "question": "What monitoring tools are mentioned for Amazon EKS?", "answer_span": "Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator.", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. 
Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} -{"global_id": 456, "doc_id": "eks", "chunk_id": "2", "question_id": 1, "question": "What tools are included for monitoring Amazon EKS clusters?", "answer_span": "Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. 
When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} -{"global_id": 457, "doc_id": "eks", "chunk_id": "2", "question_id": 2, "question": "What type of support does Amazon EKS offer for Kubernetes?", "answer_span": "EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} -{"global_id": 458, "doc_id": "eks", "chunk_id": "2", "question_id": 3, "question": "Which AWS service can be used to store container images securely?", "answer_span": "Amazon ECR Store container images securely with Amazon ECR.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. 
Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} -{"global_id": 459, "doc_id": "eks", "chunk_id": "2", "question_id": 4, "question": "What is the basis for Amazon EKS pricing?", "answer_span": "Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. 
Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} -{"global_id": 460, "doc_id": "eks", "chunk_id": "3", "question_id": 1, "question": "What do you pay for when using EKS?", "answer_span": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. 
Executing machine learning workloads Amazon EKS"} -{"global_id": 461, "doc_id": "eks", "chunk_id": "3", "question_id": 2, "question": "What is one of the common use cases of Amazon EKS?", "answer_span": "Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS"} -{"global_id": 462, "doc_id": "eks", "chunk_id": "3", "question_id": 3, "question": "How can you run serverless applications with Amazon EKS?", "answer_span": "Use AWS Fargate with Amazon EKS to run serverless applications.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. 
Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS"} -{"global_id": 463, "doc_id": "eks", "chunk_id": "3", "question_id": 4, "question": "What should you visit for detailed pricing information on AWS services used with Kubernetes applications?", "answer_span": "Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. 
Executing machine learning workloads Amazon EKS"} -{"global_id": 464, "doc_id": "eks", "chunk_id": "4", "question_id": 1, "question": "What does AWS Fargate allow you to focus on when running serverless applications?", "answer_span": "This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} -{"global_id": 465, "doc_id": "eks", "chunk_id": "4", "question_id": 2, "question": "Which machine learning frameworks is Amazon EKS compatible with?", "answer_span": "Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. 
Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} -{"global_id": 466, "doc_id": "eks", "chunk_id": "4", "question_id": 3, "question": "What can you use to automate Kubernetes cluster lifecycle management in self-contained environments?", "answer_span": "For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). 
This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} -{"global_id": 467, "doc_id": "eks", "chunk_id": "4", "question_id": 4, "question": "How does Amazon EKS ensure data privacy and protection?", "answer_span": "Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS).", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} -{"global_id": 468, "doc_id": "eks", "chunk_id": "5", "question_id": 1, "question": "What does Amazon EKS ensure for every cluster?", "answer_span": "Amazon EKS ensures every cluster has its own unique Kubernetes control plane.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. 
This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} -{"global_id": 469, "doc_id": "eks", "chunk_id": "5", "question_id": 2, "question": "How many API server instances are positioned in the control plane?", "answer_span": "The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. 
Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} -{"global_id": 470, "doc_id": "eks", "chunk_id": "5", "question_id": 3, "question": "What does Amazon EKS use to limit traffic between control plane components?", "answer_span": "Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. 
It automates updates and upgrades while respecting Pod"} -{"global_id": 471, "doc_id": "eks", "chunk_id": "5", "question_id": 4, "question": "What is the purpose of EKS Auto Mode?", "answer_span": "EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} -{"global_id": 472, "doc_id": "eks", "chunk_id": "6", "question_id": 1, "question": "What does EKS Auto Mode do?", "answer_span": "EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. 
AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"} -{"global_id": 473, "doc_id": "eks", "chunk_id": "6", "question_id": 2, "question": "What is AWS Fargate?", "answer_span": "AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. 
In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"} -{"global_id": 474, "doc_id": "eks", "chunk_id": "6", "question_id": 3, "question": "What is the purpose of Karpenter?", "answer_span": "Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"} -{"global_id": 475, "doc_id": "eks", "chunk_id": "6", "question_id": 4, "question": "What do managed node groups provide?", "answer_span": "Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster.", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. 
EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"} -{"global_id": 476, "doc_id": "eks", "chunk_id": "7", "question_id": 1, "question": "What do AWS Identity and Access Management (IAM) roles enhance?", "answer_span": "Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. 
While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} -{"global_id": 477, "doc_id": "eks", "chunk_id": "7", "question_id": 2, "question": "What do self-managed nodes offer users?", "answer_span": "Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} -{"global_id": 478, "doc_id": "eks", "chunk_id": "7", "question_id": 3, "question": "What is the purpose of Amazon EKS Hybrid Nodes?", "answer_span": "With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters.", "chunk": "management policies. 
Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} -{"global_id": 479, "doc_id": "eks", "chunk_id": "7", "question_id": 4, "question": "What does the first section of Kubernetes concepts describe?", "answer_span": "The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. 
While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} -{"global_id": 480, "doc_id": "eks", "chunk_id": "8", "question_id": 1, "question": "What is the main focus of the Workloads section?", "answer_span": "The Workloads section covers how Kubernetes applications are built, stored, run, and managed.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. 
This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} -{"global_id": 481, "doc_id": "eks", "chunk_id": "8", "question_id": 2, "question": "What are some features of Kubernetes that help with application management?", "answer_span": "Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} -{"global_id": 482, "doc_id": "eks", "chunk_id": "8", "question_id": 3, "question": "What is the purpose of creating configuration files in Kubernetes?", "answer_span": "The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? 
• Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} -{"global_id": 483, "doc_id": "eks", "chunk_id": "8", "question_id": 4, "question": "Why was Kubernetes designed?", "answer_span": "Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. 
Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} -{"global_id": 484, "doc_id": "eks", "chunk_id": "9", "question_id": 1, "question": "What format do developers typically use to create configuration files for Kubernetes?", "answer_span": "The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} -{"global_id": 485, "doc_id": "eks", "chunk_id": "9", "question_id": 2, "question": "What is a key requirement for using Kubernetes?", "answer_span": "To use Kubernetes, you must first have your applications containerized.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. 
The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} -{"global_id": 486, "doc_id": "eks", "chunk_id": "9", "question_id": 3, "question": "How does Kubernetes respond if the demand for applications exceeds capacity?", "answer_span": "Kubernetes is able to scale up.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. 
Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} -{"global_id": 487, "doc_id": "eks", "chunk_id": "9", "question_id": 4, "question": "What happens if an application or node becomes unhealthy or unavailable?", "answer_span": "Kubernetes can move running workloads.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} -{"global_id": 488, "doc_id": "eks", "chunk_id": "10", "question_id": 1, "question": "What services can delete unnecessary Pods and shut down unneeded nodes?", "answer_span": "Cluster Autoscaler or Karpenter", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. 
• Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} -{"global_id": 489, "doc_id": "eks", "chunk_id": "10", "question_id": 2, "question": "What happens if an application or node becomes unhealthy or unavailable?", "answer_span": "Kubernetes can move running workloads to another available node.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. 
• Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} -{"global_id": 490, "doc_id": "eks", "chunk_id": "10", "question_id": 3, "question": "How does Kubernetes ensure that the declared state matches the actual state?", "answer_span": "Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} -{"global_id": 491, "doc_id": "eks", "chunk_id": "10", "question_id": 4, "question": "What command can help manage multiple components in Kubernetes?", "answer_span": "the Kubernetes Kompose command can help you do that with Kubernetes.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. 
You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} -{"global_id": 492, "doc_id": "eks", "chunk_id": "11", "question_id": 1, "question": "What can command help you do with Kubernetes?", "answer_span": "command can help you do that with Kubernetes.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} -{"global_id": 493, "doc_id": "eks", "chunk_id": "11", "question_id": 2, "question": "What is the nature of the Kubernetes project?", "answer_span": "the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. 
• Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} -{"global_id": 494, "doc_id": "eks", "chunk_id": "11", "question_id": 3, "question": "Why do many organizations standardize their operations on Kubernetes?", "answer_span": "Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. 
Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} -{"global_id": 495, "doc_id": "eks", "chunk_id": "11", "question_id": 4, "question": "What do most people deploying production workloads choose for managing Kubernetes?", "answer_span": "most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. 
This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} -{"global_id": 496, "doc_id": "eks", "chunk_id": "12", "question_id": 1, "question": "What does Amazon EKS allow you to do regarding hardware?", "answer_span": "With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS).", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} -{"global_id": 497, "doc_id": "eks", "chunk_id": "12", "question_id": 2, "question": "What is the responsibility of users for Amazon EKS Anywhere?", "answer_span": "For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). 
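Because the managed service stitches those AWS resources (EC2, VPC, IAM, EBS) together for you, you can inspect what it provisioned rather than build it. A hedged sketch follows; the cluster name and region are placeholders.

```python
# Sketch: inspect the AWS resources behind a managed cluster with boto3.
# "my-cluster" and "us-west-2" are hypothetical placeholders.
import boto3

eks = boto3.client("eks", region_name="us-west-2")
cluster = eks.describe_cluster(name="my-cluster")["cluster"]

print("Kubernetes version:", cluster["version"])
print("Cluster IAM role:  ", cluster["roleArn"])
print("VPC:               ", cluster["resourcesVpcConfig"]["vpcId"])
print("Subnets:           ", cluster["resourcesVpcConfig"]["subnetIds"])
```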
AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} -{"global_id": 498, "doc_id": "eks", "chunk_id": "12", "question_id": 3, "question": "What does Amazon EKS manage regarding the control plane?", "answer_span": "Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. 
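The "tested upgrades" point above corresponds to asking Amazon EKS to move the managed control plane to a version it has already validated. The sketch below is illustrative only; the cluster name and target version are placeholders, and the call starts an asynchronous update rather than completing it.

```python
# Sketch: request a control plane upgrade to a tested Kubernetes version.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

current = eks.describe_cluster(name="my-cluster")["cluster"]["version"]
print("current control plane version:", current)

# Kicks off an asynchronous update that Amazon EKS performs for you.
update = eks.update_cluster_version(name="my-cluster", version="1.30")
print("update id:", update["update"]["id"], "status:", update["update"]["status"])
```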
• Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} -{"global_id": 499, "doc_id": "eks", "chunk_id": "12", "question_id": 4, "question": "What can users rely on when upgrading their clusters?", "answer_span": "When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} -{"global_id": 500, "doc_id": "eks", "chunk_id": "13", "question_id": 1, "question": "What services can you rely on to upgrade your clusters?", "answer_span": "you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. 
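Enabling one of those curated add-ons is a single API call rather than a manual install. A small sketch, with hypothetical cluster and region names; the add-on name used here is one of the published Amazon EKS add-ons.

```python
# Sketch: discover and enable an Amazon EKS add-on instead of installing the
# component by hand. Names are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Which add-ons has EKS tested and published for a given Kubernetes version?
for addon in eks.describe_addon_versions(kubernetesVersion="1.30")["addons"][:5]:
    print(addon["addonName"])

# Install one on an existing cluster; EKS then manages its lifecycle.
eks.create_addon(clusterName="my-cluster", addonName="aws-ebs-csi-driver")
```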
Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} -{"global_id": 501, "doc_id": "eks", "chunk_id": "13", "question_id": 2, "question": "What does AWS provide to help with add-ons for Kubernetes?", "answer_span": "AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. 
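Underneath tools like eksctl or the console, that creation step is a provider-specific API call. The sketch below shows the shape of it with boto3; the IAM role ARN and subnet IDs are hypothetical, and in practice eksctl creates those supporting resources for you.

```python
# Sketch: the provider-specific cluster creation call that eksctl or the
# console makes on your behalf. Role ARN and subnet IDs are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_cluster(
    name="my-cluster",
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
    },
)

# Cluster creation is asynchronous; wait until the control plane is ACTIVE.
waiter = eks.get_waiter("cluster_active")
waiter.wait(name="my-cluster")
```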
The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} -{"global_id": 502, "doc_id": "eks", "chunk_id": "13", "question_id": 3, "question": "What does Amazon EKS Anywhere provide for managing software?", "answer_span": "Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} -{"global_id": 503, "doc_id": "eks", "chunk_id": "13", "question_id": 4, "question": "What does the managed service Amazon EKS automatically allocate?", "answer_span": "The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. 
See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} -{"global_id": 504, "doc_id": "eks", "chunk_id": "14", "question_id": 1, "question": "What does the managed service allocate for running workloads?", "answer_span": "allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. 
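Before the cluster can pull those container images, the developer needs a registry it can reach, which is where Amazon ECR comes in. A brief sketch, with a hypothetical repository name; the actual image build and push still happen with your container tooling.

```python
# Sketch: prepare an Amazon ECR repository that the cluster can pull from.
import boto3

ecr = boto3.client("ecr", region_name="us-west-2")

repo = ecr.create_repository(repositoryName="my-app")["repository"]
print("push and pull images at:", repo["repositoryUri"])

# An authorization token lets docker (or any OCI client) log in to the registry.
token = ecr.get_authorization_token()["authorizationData"][0]
print("registry endpoint:", token["proxyEndpoint"])
```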
With that all done, someone wanting to"} -{"global_id": 505, "doc_id": "eks", "chunk_id": "14", "question_id": 2, "question": "What tool does the Kubernetes Admin use to make requests for services?", "answer_span": "That tool makes requests for services directly to the cluster’s control plane.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} -{"global_id": 506, "doc_id": "eks", "chunk_id": "14", "question_id": 3, "question": "What must an application developer do to deploy workloads to the cluster?", "answer_span": "The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. 
The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} -{"global_id": 507, "doc_id": "eks", "chunk_id": "14", "question_id": 4, "question": "What does the control plane do with the containers?", "answer_span": "The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. 
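The YAML configuration files mentioned above carry the same instructions that can be expressed programmatically. The sketch below, using the official Kubernetes Python client, creates a small Deployment and exposes it through a Service of type LoadBalancer; the image URI and application name are hypothetical.

```python
# Sketch: the equivalent of a Deployment + Service YAML pair, expressed with
# the official Kubernetes Python client. Image URI and names are placeholders.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "my-app"}
container = client.V1Container(
    name="my-app",
    image="111122223333.dkr.ecr.us-west-2.amazonaws.com/my-app:latest",
    ports=[client.V1ContainerPort(container_port=8080)],
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

# A Service of type LoadBalancer asks the cloud provider to provision an
# external load balancer in front of the Pods.
service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```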
With that all done, someone wanting to"} -{"global_id": 508, "doc_id": "eks", "chunk_id": "15", "question_id": 1, "question": "What can a developer set up to balance traffic to available containers?", "answer_span": "The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} -{"global_id": 509, "doc_id": "eks", "chunk_id": "15", "question_id": 2, "question": "What should someone managing Kubernetes clusters know?", "answer_span": "If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. 
For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} -{"global_id": 510, "doc_id": "eks", "chunk_id": "15", "question_id": 3, "question": "What tools can be used to create a Kubernetes cluster manually?", "answer_span": "So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. 
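The eksctl commands named above are ordinary CLI invocations, so they can also be driven from a script. This is only a sketch: the cluster name, region, and the EKS Anywhere spec file are placeholders, and exact flags should be confirmed against the eksctl documentation.

```python
# Sketch: driving the provider-specific CLIs from a script.
import subprocess

# Amazon EKS in the AWS Cloud:
subprocess.run(
    ["eksctl", "create", "cluster", "--name", "my-cluster", "--region", "us-west-2"],
    check=True,
)

# Amazon EKS Anywhere, driven by a declarative cluster spec file:
subprocess.run(
    ["eksctl", "anywhere", "create", "cluster", "-f", "eksa-cluster.yaml"],
    check=True,
)
```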
To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} -{"global_id": 511, "doc_id": "eks", "chunk_id": "15", "question_id": 4, "question": "What is the purpose of automation in managing Kubernetes clusters?", "answer_span": "For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider.", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} -{"global_id": 512, "doc_id": "eks", "chunk_id": "16", "question_id": 1, "question": "What tools can be used to create and manage Kubernetes clusters?", "answer_span": "you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. 
Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} -{"global_id": 513, "doc_id": "eks", "chunk_id": "16", "question_id": 2, "question": "What does Amazon EKS manage for you?", "answer_span": "Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. 
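For the node management point above, a Managed Node Group is requested with one call and Amazon EKS provisions and registers the instances. A hedged sketch; the node IAM role ARN and subnet IDs are hypothetical.

```python
# Sketch: ask Amazon EKS to manage worker nodes with a Managed Node Group.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="default-workers",
    nodeRole="arn:aws:iam::111122223333:role/eks-node-role",
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    instanceTypes=["m5.large"],
    scalingConfig={"minSize": 1, "maxSize": 4, "desiredSize": 2},
)
```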
Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} -{"global_id": 514, "doc_id": "eks", "chunk_id": "16", "question_id": 3, "question": "How can you create Amazon EKS clusters in AWS Cloud?", "answer_span": "In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} -{"global_id": 515, "doc_id": "eks", "chunk_id": "16", "question_id": 4, "question": "What is the purpose of Managed Node Groups in Amazon EKS?", "answer_span": "Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. 
• Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} -{"global_id": 516, "doc_id": "eks", "chunk_id": "17", "question_id": 1, "question": "What does Amazon EKS save you from having to build?", "answer_span": "Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} -{"global_id": 517, "doc_id": "eks", "chunk_id": "17", "question_id": 2, "question": "What is the purpose of Amazon EKS Pod Identities?", "answer_span": "Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities, which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. 
It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} -{"global_id": 518, "doc_id": "eks", "chunk_id": "17", "question_id": 3, "question": "What platforms can Amazon EKS Anywhere run on?", "answer_span": "You have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. 
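EKS Pod Identities, mentioned above, associate a Kubernetes service account with an IAM role so that Pods can obtain AWS credentials. The sketch below assumes a recent boto3 that includes the Pod Identity APIs; all names and the role ARN are placeholders.

```python
# Sketch: bind a Kubernetes service account to an IAM role with EKS Pod Identity.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_pod_identity_association(
    clusterName="my-cluster",
    namespace="default",
    serviceAccount="my-app",
    roleArn="arn:aws:iam::111122223333:role/my-app-pod-role",
)
```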
Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} -{"global_id": 519, "doc_id": "eks", "chunk_id": "17", "question_id": 4, "question": "What added responsibility comes with running Amazon EKS Anywhere?", "answer_span": "Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} -{"global_id": 520, "doc_id": "eks", "chunk_id": "18", "question_id": 1, "question": "What is the responsibility of managing the control plane in an Amazon EKS Anywhere cluster?", "answer_span": "Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. 
Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} -{"global_id": 521, "doc_id": "eks", "chunk_id": "18", "question_id": 2, "question": "What are the two major areas into which Kubernetes cluster components are divided?", "answer_span": "Kubernetes cluster components are divided into two major areas: control plane and worker nodes.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. 
Likewise, requests can be made from the API server to components within"} -{"global_id": 522, "doc_id": "eks", "chunk_id": "18", "question_id": 3, "question": "What is referred to as the Data Plane in a Kubernetes cluster?", "answer_span": "The set of worker nodes for your cluster is referred to as the Data Plane.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} -{"global_id": 523, "doc_id": "eks", "chunk_id": "18", "question_id": 4, "question": "What does the API server (kube-apiserver) expose?", "answer_span": "The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster.", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). 
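To make the API server interaction above concrete, the following minimal sketch uses the official Kubernetes Python client (assumptions: the kubernetes package is installed and a kubeconfig for the cluster is available, for example after running aws eks update-kubeconfig) to ask the API server for a Pod's status over the same path that kubectl uses; the Pod name and namespace are placeholders.

from kubernetes import client, config

# Load credentials and the API server endpoint from the local kubeconfig,
# just as kubectl does.
config.load_kube_config()

core_v1 = client.CoreV1Api()

# Ask the API server for a Pod's current status. The API server relies on
# status that the kubelet on the Pod's node reports back to it.
pod = core_v1.read_namespaced_pod(name="my-app", namespace="default")  # placeholder names
print(pod.status.phase)  # for example "Running" or "Pending"
for condition in pod.status.conditions or []:
    print(condition.type, condition.status)

The same CoreV1Api object can also list or watch Pods, Services, and Nodes, which is how many controllers and add-ons observe the state of the cluster.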
How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} -{"global_id": 524, "doc_id": "eks", "chunk_id": "19", "question_id": 1, "question": "What types of requests can come from outside commands regarding a cluster's objects?", "answer_span": "requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod.", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} -{"global_id": 525, "doc_id": "eks", "chunk_id": "19", "question_id": 2, "question": "What role does the etcd service play in a cluster?", "answer_span": "The etcd service provides the critical role of keeping track of the current state of the cluster.", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. 
• Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} -{"global_id": 526, "doc_id": "eks", "chunk_id": "19", "question_id": 3, "question": "What happens if the etcd service becomes inaccessible?", "answer_span": "If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while.", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. 
Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} -{"global_id": 527, "doc_id": "eks", "chunk_id": "19", "question_id": 4, "question": "Who is responsible for scheduling Pods to nodes in Kubernetes?", "answer_span": "Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler).", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} -{"global_id": 528, "doc_id": "eks", "chunk_id": "20", "question_id": 1, "question": "What happens if there is not enough available capacity to run the requested Pod on an existing node?", "answer_span": "the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). 
Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} -{"global_id": 529, "doc_id": "eks", "chunk_id": "20", "question_id": 2, "question": "What is the role of the Kubernetes Controller Manager?", "answer_span": "The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). 
Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} -{"global_id": 530, "doc_id": "eks", "chunk_id": "20", "question_id": 3, "question": "What does the Cloud Controller Manager handle?", "answer_span": "Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager).", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} -{"global_id": 531, "doc_id": "eks", "chunk_id": "20", "question_id": 4, "question": "What is a more standard configuration for running Kubernetes workloads?", "answer_span": "a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. 
In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} -{"global_id": 532, "doc_id": "eks", "chunk_id": "21", "question_id": 1, "question": "What is the role of the kubelet in managing nodes?", "answer_span": "The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. 
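As an illustrative sketch of the Service abstraction that kube-proxy implements on each node, the manifest below is expressed as a plain dict and submitted with the Kubernetes Python client (an assumption about tooling; applying the equivalent YAML with kubectl works the same way). The Service name, label selector, and ports are placeholders.

from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# A ClusterIP Service: kube-proxy on every node programs the forwarding rules
# that send traffic for the Service's IP and port to the selected Pods.
service_manifest = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},                      # placeholder name
    "spec": {
        "selector": {"app": "web"},                   # matches Pods labeled app=web
        "ports": [{"port": 80, "targetPort": 8080}],  # Service port -> container port
        "type": "ClusterIP",
    },
}

core_v1.create_namespaced_service(namespace="default", body=service_manifest)

Other Pods can then reach the selected Pods through the Service name, which CoreDNS (described next) resolves inside the cluster.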
Extend Clusters There are some services you can add to Kubernetes"} -{"global_id": 533, "doc_id": "eks", "chunk_id": "21", "question_id": 2, "question": "What does the Container Runtime manage on each node?", "answer_span": "The Container Runtime on each node manages the containers requested for each Pod assigned to the node.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} -{"global_id": 534, "doc_id": "eks", "chunk_id": "21", "question_id": 3, "question": "What is the default container runtime mentioned in the text?", "answer_span": "The default container runtime is containerd.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. 
While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} -{"global_id": 535, "doc_id": "eks", "chunk_id": "21", "question_id": 4, "question": "How does Kubernetes support communication between Pods?", "answer_span": "Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} -{"global_id": 536, "doc_id": "eks", "chunk_id": "22", "question_id": 1, "question": "What feature is used to set up Pod networks that track IP addresses and ports?", "answer_span": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). 
A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} -{"global_id": 537, "doc_id": "eks", "chunk_id": "22", "question_id": 2, "question": "What runs on every node to allow communication between Pods?", "answer_span": "The kube-proxy service runs on every node to allow that communication between Pods to take place.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. 
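To see the kinds of built-in and add-on services described above, the short sketch below (again assuming the Kubernetes Python client and a working kubeconfig) lists what is running in the kube-system namespace, where components such as CoreDNS, kube-proxy, and the Amazon VPC CNI plugin typically appear.

from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# Roughly equivalent to: kubectl get pods -n kube-system
for pod in core_v1.list_namespaced_pod(namespace="kube-system").items:
    print(pod.metadata.name, pod.status.phase)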
Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} -{"global_id": 538, "doc_id": "eks", "chunk_id": "22", "question_id": 3, "question": "What is a common example of a service that provides DNS services to the cluster?", "answer_span": "A common example is the CoreDNS service, which provides DNS services to the cluster.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called ��Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} -{"global_id": 539, "doc_id": "eks", "chunk_id": "22", "question_id": 4, "question": "What does Kubernetes define as a Workload?", "answer_span": "Kubernetes defines a Workload as 'an application running on Kubernetes.'", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. 
With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} -{"global_id": 540, "doc_id": "eks", "chunk_id": "23", "question_id": 1, "question": "What is defined as a Workload in Kubernetes?", "answer_span": "Kubernetes defines a Workload as \"an application running on Kubernetes.\"", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} -{"global_id": 541, "doc_id": "eks", "chunk_id": "23", "question_id": 2, "question": "What is the most basic element of an application workload in Kubernetes?", "answer_span": "The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. 
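To show how one of the storage add-ons above is typically consumed, the sketch below requests a volume through a StorageClass backed by the Amazon EBS CSI driver. The StorageClass name is cluster specific, so "gp3" here is only an assumption, and the Kubernetes Python client plus a kubeconfig are assumed as before.

from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# PersistentVolumeClaim against a StorageClass served by the EBS CSI driver.
# The StorageClass name depends on what is installed in the cluster.
pvc_manifest = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},   # placeholder name
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "gp3",      # assumed StorageClass name
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc_manifest)

A Pod that mounts this claim gets an EBS volume dynamically provisioned and attached on the node where the Pod is scheduled.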
Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} -{"global_id": 542, "doc_id": "eks", "chunk_id": "23", "question_id": 3, "question": "What does a Pod represent in Kubernetes?", "answer_span": "A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. 
Likewise, all containers in a Pod share the same environment, with the containers"} -{"global_id": 543, "doc_id": "eks", "chunk_id": "23", "question_id": 4, "question": "Can multiple containers be in a Pod, and under what circumstances?", "answer_span": "However, multiple containers can be in a Pod in cases where the containers are tightly coupled.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} -{"global_id": 544, "doc_id": "eks", "chunk_id": "24", "question_id": 1, "question": "What ensures that both containers in a Pod always run on the same node?", "answer_span": "In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. 
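Pulling these ideas together before moving on to how containers are built, the sketch below defines a Deployment (one of the workload resources just mentioned) whose Pod template holds a main web container plus a tightly coupled logging sidecar. The image names, labels, and replica count are placeholders, and the Kubernetes Python client with a kubeconfig is assumed.

from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

deployment_manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},                    # placeholder name
    "spec": {
        "replicas": 2,                              # keep two Pod replicas running
        "selector": {"matchLabels": {"app": "web"}},
        "template": {                               # the Pod template the Deployment manages
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {   # main application container
                        "name": "web",
                        "image": "public.ecr.aws/nginx/nginx:latest",  # placeholder image
                        "ports": [{"containerPort": 80}],
                    },
                    {   # sidecar that shares the Pod's network and lifecycle
                        "name": "log-forwarder",
                        "image": "example.com/log-forwarder:latest",   # placeholder image
                    },
                ]
            },
        },
    },
}

apps_v1.create_namespaced_deployment(namespace="default", body=deployment_manifest)

Because both containers sit in the same Pod template, every replica runs them on the same node with a shared network namespace, and the Deployment controller in the control plane keeps the requested number of replicas running.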
Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} -{"global_id": 545, "doc_id": "eks", "chunk_id": "24", "question_id": 2, "question": "What do Pod specifications (PodSpec) define?", "answer_span": "Pod specifications (PodSpec) define the desired state of the Pod.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} -{"global_id": 546, "doc_id": "eks", "chunk_id": "24", "question_id": 3, "question": "What is the smallest unit you deploy?", "answer_span": "While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. 
The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} -{"global_id": 547, "doc_id": "eks", "chunk_id": "24", "question_id": 4, "question": "What is typically used to start building a container?", "answer_span": "When you build a container, you typically start with a Dockerfile (literally named that).", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. 
Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} -{"global_id": 548, "doc_id": "eks", "chunk_id": "25", "question_id": 1, "question": "What are containers often referred to as?", "answer_span": "others often refer to containers as OCI Containers, Linux Containers, or just Containers.", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} -{"global_id": 549, "doc_id": "eks", "chunk_id": "25", "question_id": 2, "question": "What is typically the starting point for building a container?", "answer_span": "you typically start with a Dockerfile (literally named that).", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. 
• Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} -{"global_id": 550, "doc_id": "eks", "chunk_id": "25", "question_id": 3, "question": "What can you add to your container in a similar way to a Linux system?", "answer_span": "You can add your application software to your container in much the same way you would add it to a Linux system.", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} -{"global_id": 551, "doc_id": "eks", "chunk_id": "25", "question_id": 4, "question": "What does the Dockerfile reference describe?", "answer_span": "The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it.", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). 
Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} -{"global_id": 552, "doc_id": "eks", "chunk_id": "26", "question_id": 1, "question": "What tools are mentioned as alternatives to the docker command for building container images?", "answer_span": "other tools that are available to build container images include podman and nerdctl.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). 
To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} -{"global_id": 553, "doc_id": "eks", "chunk_id": "26", "question_id": 2, "question": "What is the purpose of a private container registry?", "answer_span": "Running a private container registry on your workstation allows you to store container images locally, making them readily available to you.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} -{"global_id": 554, "doc_id": "eks", "chunk_id": "26", "question_id": 3, "question": "Which public container registries are mentioned in the text?", "answer_span": "Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. 
Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} -{"global_id": 555, "doc_id": "eks", "chunk_id": "26", "question_id": 4, "question": "What command can be used to run a container on a local desktop?", "answer_span": "you can use docker run or podman run commands to start", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). 
To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} -{"global_id": 556, "doc_id": "eks", "chunk_id": "27", "question_id": 1, "question": "What is required for a machine to run a container?", "answer_span": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm).", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} -{"global_id": 557, "doc_id": "eks", "chunk_id": "27", "question_id": 2, "question": "What commands can be used to start a container on the localhost?", "answer_span": "you can use docker run or podman run commands to start up a container on the localhost.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. 
Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} -{"global_id": 558, "doc_id": "eks", "chunk_id": "27", "question_id": 3, "question": "What does Kubernetes do when a container image is not found on a node?", "answer_span": "If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} -{"global_id": 559, "doc_id": "eks", "chunk_id": "27", "question_id": 4, "question": "What must be included when defining a Pod?", "answer_span": "Those attributes must include at least the Pod name and the container image to run.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. 
Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} -{"global_id": 560, "doc_id": "eks", "chunk_id": "28", "question_id": 1, "question": "What happens to data storage in a running container when it is stopped and deleted?", "answer_span": "data storage in that container will disappear, unless you set up more permanent storage.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. 
For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} -{"global_id": 561, "doc_id": "eks", "chunk_id": "28", "question_id": 2, "question": "What types of storage does Kubernetes support?", "answer_span": "Storage types include CephFS, NFS, iSCSI, and others.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} -{"global_id": 562, "doc_id": "eks", "chunk_id": "28", "question_id": 3, "question": "What is the difference between a Persistent Volume and an Ephemeral Volume?", "answer_span": "A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. 
• Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} -{"global_id": 563, "doc_id": "eks", "chunk_id": "28", "question_id": 4, "question": "What can be stored as secrets in Kubernetes?", "answer_span": "Keys, passwords, and tokens are among the items that can be stored as secrets.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} -{"global_id": 564, "doc_id": "eks", "chunk_id": "29", "question_id": 1, "question": "What can be requested for each container in terms of resources?", "answer_span": "For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. 
See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kubesystem namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless"} -{"global_id": 565, "doc_id": "eks", "chunk_id": "29", "question_id": 2, "question": "What is a Pod disruption budget used for?", "answer_span": "By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kubesystem namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. 
Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless"} -{"global_id": 566, "doc_id": "eks", "chunk_id": "29", "question_id": 3, "question": "What is a common way to secure and manage Pods for a particular application?", "answer_span": "Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kubesystem namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless"} -{"global_id": 567, "doc_id": "eks", "chunk_id": "29", "question_id": 4, "question": "What is GitOps used for in the context of Kubernetes?", "answer_span": "However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. 
• Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kube-system namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless"} -{"global_id": 568, "doc_id": "eks", "chunk_id": "30", "question_id": 1, "question": "What is the main factor that determines the method for deploying Pods?", "answer_span": "The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. 
For example, your data center might require"} -{"global_id": 569, "doc_id": "eks", "chunk_id": "30", "question_id": 2, "question": "What characterizes a stateless application?", "answer_span": "A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. 
These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require"} -{"global_id": 570, "doc_id": "eks", "chunk_id": "30", "question_id": 3, "question": "What is a common way to deploy Pods for stateless applications?", "answer_span": "If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require"} -{"global_id": 571, "doc_id": "eks", "chunk_id": "30", "question_id": 4, "question": "What is an example of an application that is typically run as a StatefulSet?", "answer_span": "An example of an application that is typically run as a StatefulSet is a database.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide • Stateless applications — A stateless application doesn’t save a client’s session data, so another session doesn’t need to refer back to what happened to a previous session. This makes it easier to just replace Pods with new ones if they become unhealthy or move them around without saving state. If you are running a stateless application (such as a web server), you can use a Deployment to deploy Pods and ReplicaSets. A ReplicaSet defines how many instances of a Pod that you want running concurrently. Although you can run a ReplicaSet directly, it is common to run replicas directly within a Deployment, to define how many replicas of a Pod should be running at a time. • Stateful applications — A stateful application is one where the identity of the Pod and the order in which Pods are launched are important. These applications need persistent storage that is stable and need to be deployed and scaled in a consistent manner. To deploy a stateful application in Kubernetes, you can use StatefulSets. An example of an application that is typically run as a StatefulSet is a database. Within a StatefulSet, you could define replicas, the Pod and its containers, storage volumes to mount, and locations in the container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require"} -{"global_id": 572, "doc_id": "eks", "chunk_id": "31", "question_id": 1, "question": "What is a DaemonSet used for in Kubernetes?", "answer_span": "For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. 
• Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service,"} -{"global_id": 573, "doc_id": "eks", "chunk_id": "31", "question_id": 2, "question": "What is the purpose of a Job object in Kubernetes?", "answer_span": "A Job object can be used to set up an application to start up and run, then exit when the task is done.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. • Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. 
With a Service,"} -{"global_id": 574, "doc_id": "eks", "chunk_id": "31", "question_id": 3, "question": "How does Kubernetes allow applications to be accessible from the network?", "answer_span": "Kubernetes needed a way to expose that application on outside addresses and ports.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. • Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service,"} -{"global_id": 575, "doc_id": "eks", "chunk_id": "31", "question_id": 4, "question": "What are Services used for in Kubernetes?", "answer_span": "Kubernetes lets you represent an application as a Service.", "chunk": "container where data are stored. See Run a Replicated Stateful Application for an example of a database being deployed as a ReplicaSet. • Per-node applications — There are times when you want to run an application on every node in your Kubernetes cluster. For example, your data center might require that every computer run a monitoring application or a particular remote access service. For Kubernetes, you can use a DaemonSet to ensure that the selected application runs on every node in your cluster. • Applications run to completion — There are some applications you want to run to complete a particular task. This could include one that runs monthly status reports or cleans out old data. A Job object can be used to set up an application to start up and run, then exit when the task is done. A CronJob object lets you set up an application to run at a specific hour, minute, day of the month, month, or day of the week, using a structure defined by the Linux crontab format. Making applications accessible from the network With applications often deployed as a set of microservices that moved around to different places, Kubernetes needed a way for those microservices to be able to find each other. 
Also, for others to access an application outside of the Kubernetes cluster, Kubernetes needed a way to expose that application on outside addresses and ports. These networking-related features are done with Service and Ingress objects, respectively: • Services — Because a Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service,"} -{"global_id": 576, "doc_id": "eks", "chunk_id": "32", "question_id": 1, "question": "What does Kubernetes allow you to represent an application as?", "answer_span": "Kubernetes lets you represent an application as a Service.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"} -{"global_id": 577, "doc_id": "eks", "chunk_id": "32", "question_id": 2, "question": "How can another Pod within a cluster request a Service?", "answer_span": "Another Pod within a cluster can simply request a Service by name.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. 
Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"} -{"global_id": 578, "doc_id": "eks", "chunk_id": "32", "question_id": 3, "question": "What is the purpose of Ingress in Kubernetes?", "answer_span": "Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. 
In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"} -{"global_id": 579, "doc_id": "eks", "chunk_id": "32", "question_id": 4, "question": "What is Amazon EKS?", "answer_span": "Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments.", "chunk": "Pod can move around to different nodes and addresses, another Pod that needs to communicate with the first Pod could find it difficult to locate where it is. To solve Workloads 23 Amazon EKS User Guide this problem, Kubernetes lets you represent an application as a Service. With a Service, you can identify a Pod or set of Pods with a particular name, then indicate what port exposes that application’s service from the Pod and what ports another application could use to contact that service. Another Pod within a cluster can simply request a Service by name and Kubernetes will direct that request to the proper port for an instance of the Pod running that service. • Ingress — Ingress is what can make applications represented by Kubernetes Services available to clients that are outside of the cluster. Basic features of Ingress include a load balancer (managed by Ingress), the Ingress controller, and rules for routing requests from the controller to the Service. There are several Ingress Controllers that you can choose from with Kubernetes. Next steps Understanding basic Kubernetes concepts and how they relate to Amazon EKS will help you navigate both the Amazon EKS documentation and Kubernetes documentation to find the information you need to manage Amazon EKS clusters and deploy workloads to those clusters. To begin using Amazon EKS, choose from the following: • the section called “Create cluster (eksctl)” • the section called “Create a cluster” • the section called “Sample deployment (Linux)” • Cluster management Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure"} -{"global_id": 580, "doc_id": "eks", "chunk_id": "33", "question_id": 1, "question": "What is Amazon EKS?", "answer_span": "Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. 
Next steps 24 Amazon EKS User Guide To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools to the section called “Nodes” or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane is fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information see AWS Local Zones features"} -{"global_id": 581, "doc_id": "eks", "chunk_id": "33", "question_id": 2, "question": "What does Amazon EKS automate in the cloud?", "answer_span": "In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. Next steps 24 Amazon EKS User Guide To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools to the section called “Nodes” or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane is fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. 
For more information see AWS Local Zones features"} -{"global_id": 582, "doc_id": "eks", "chunk_id": "33", "question_id": 3, "question": "What options are available for running Amazon EKS in on-premises environments?", "answer_span": "To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools to the section called “Nodes” or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. Next steps 24 Amazon EKS User Guide To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools to the section called “Nodes” or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane is fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information see AWS Local Zones features"} -{"global_id": 583, "doc_id": "eks", "chunk_id": "33", "question_id": 4, "question": "What is Amazon EKS Auto Mode?", "answer_span": "When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click.", "chunk": "Deploy Amazon EKS clusters across cloud and on-premises environments Understand Amazon EKS deployment options Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in the cloud and in your on-premises environments. In the cloud, Amazon EKS automates Kubernetes cluster infrastructure management for the Kubernetes control plane and nodes. 
This is essential for scheduling containers, managing application availability, dynamically scaling resources, optimizing compute, storing cluster data, and performing other critical functions. With Amazon EKS, you get the robust performance, scalability, reliability, and availability of AWS infrastructure, along with native integrations with AWS networking, security, storage, and observability services. Next steps 24 Amazon EKS User Guide To simplify running Kubernetes in your on-premises environments, you can use the same Amazon EKS clusters, features, and tools to the section called “Nodes” or Amazon EKS Hybrid Nodes on your own infrastructure, or you can use Amazon EKS Anywhere for self-contained air-gapped environments. Amazon EKS in the cloud You can use Amazon EKS with compute in AWS Regions, AWS Local Zones, and AWS Wavelength Zones. With Amazon EKS in the cloud, the security, scalability, and availability of the Kubernetes control plane is fully managed by AWS in the AWS Region. When running applications with compute in AWS Regions, you get the full breadth of AWS and Amazon EKS features, including Amazon EKS Auto Mode, which fully automates Kubernetes cluster infrastructure management for compute, storage, and networking on AWS with a single click. When running applications with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information see AWS Local Zones features"} -{"global_id": 584, "doc_id": "eks", "chunk_id": "34", "question_id": 1, "question": "What can you use to connect Amazon EC2 instances for your cluster compute in AWS Local Zones and AWS Wavelength Zones?", "answer_span": "you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information see AWS Local Zones features and AWS Wavelength Zones features. Amazon EKS in AWS Regions Amazon EKS in Local/Wav elength Zones Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions Kubernetes data plane • Amazon EKS Auto Mode • Amazon EKS Managed Node Groups • Amazon EKS Managed Node Groups (Local Zones only) • Amazon EC2 self-managed nodes • Amazon EC2 self-managed nodes • AWS Fargate Kubernetes data plane location Amazon EKS in the cloud AWS Regions AWS Local or Wavelength Zones 25 Amazon EKS User Guide Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. 
Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"} -{"global_id": 585, "doc_id": "eks", "chunk_id": "34", "question_id": 2, "question": "What infrastructure does AWS Outposts provide?", "answer_span": "AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information see AWS Local Zones features and AWS Wavelength Zones features. Amazon EKS in AWS Regions Amazon EKS in Local/Wav elength Zones Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions Kubernetes data plane • Amazon EKS Auto Mode • Amazon EKS Managed Node Groups • Amazon EKS Managed Node Groups (Local Zones only) • Amazon EC2 self-managed nodes • Amazon EC2 self-managed nodes • AWS Fargate Kubernetes data plane location Amazon EKS in the cloud AWS Regions AWS Local or Wavelength Zones 25 Amazon EKS User Guide Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"} -{"global_id": 586, "doc_id": "eks", "chunk_id": "34", "question_id": 3, "question": "What do Amazon EKS Hybrid Nodes run on?", "answer_span": "Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information see AWS Local Zones features and AWS Wavelength Zones features. 
Amazon EKS in AWS Regions Amazon EKS in Local/Wav elength Zones Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions Kubernetes data plane • Amazon EKS Auto Mode • Amazon EKS Managed Node Groups • Amazon EKS Managed Node Groups (Local Zones only) • Amazon EC2 self-managed nodes • Amazon EC2 self-managed nodes • AWS Fargate Kubernetes data plane location Amazon EKS in the cloud AWS Regions AWS Local or Wavelength Zones 25 Amazon EKS User Guide Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"} -{"global_id": 587, "doc_id": "eks", "chunk_id": "34", "question_id": 4, "question": "What is required for Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes?", "answer_span": "Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region", "chunk": "with compute in AWS Local Zones and AWS Wavelength Zones, you can use Amazon EKS self-managed nodes to connect Amazon EC2 instances for your cluster compute and can use the other available AWS services in AWS Local Zones and AWS Wavelength Zones. For more information see AWS Local Zones features and AWS Wavelength Zones features. Amazon EKS in AWS Regions Amazon EKS in Local/Wav elength Zones Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions Kubernetes data plane • Amazon EKS Auto Mode • Amazon EKS Managed Node Groups • Amazon EKS Managed Node Groups (Local Zones only) • Amazon EC2 self-managed nodes • Amazon EC2 self-managed nodes • AWS Fargate Kubernetes data plane location Amazon EKS in the cloud AWS Regions AWS Local or Wavelength Zones 25 Amazon EKS User Guide Amazon EKS in your data center or edge environments If you need to run applications in your own data centers or edge environments, you can use Amazon EKS on AWS Outposts or Amazon EKS Hybrid Nodes. You can use self-managed nodes with Amazon EC2 instances on AWS Outposts for your cluster compute, or you can use Amazon EKS Hybrid Nodes with your own on-premises or edge infrastructure for your cluster compute. AWS Outposts is AWS-managed infrastructure that you run in your data centers or co-location facilities, whereas Amazon EKS Hybrid Nodes runs on your physical or virtual machines that you manage in your on-premises or edge environments. 
Amazon EKS on AWS Outposts and Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on"} -{"global_id": 588, "doc_id": "eks", "chunk_id": "35", "question_id": 1, "question": "What is required for Amazon EKS Hybrid Nodes to function?", "answer_span": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. Amazon EKS Hybrid Nodes Amazon EKS on AWS Outposts Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions or AWS Outposts Kubernetes data plane Customer-managed physical or virtual machines Amazon EC2 self-managed nodes Kubernetes data plane location Customer data center or edge environment Customer data center or edge environment Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, Amazon EKS in your data center or edge environments 26 Amazon EKS User Guide bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in airgapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWSvended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can"} -{"global_id": 589, "doc_id": "eks", "chunk_id": "35", "question_id": 2, "question": "What does Amazon EKS Anywhere simplify?", "answer_span": "Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. 
Amazon EKS Hybrid Nodes Amazon EKS on AWS Outposts Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions or AWS Outposts Kubernetes data plane Customer-managed physical or virtual machines Amazon EC2 self-managed nodes Kubernetes data plane location Customer data center or edge environment Customer data center or edge environment Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, Amazon EKS in your data center or edge environments 26 Amazon EKS User Guide bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in airgapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWSvended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can"} -{"global_id": 590, "doc_id": "eks", "chunk_id": "35", "question_id": 3, "question": "Who is responsible for cluster lifecycle operations in Amazon EKS Anywhere?", "answer_span": "Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. Amazon EKS Hybrid Nodes Amazon EKS on AWS Outposts Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions or AWS Outposts Kubernetes data plane Customer-managed physical or virtual machines Amazon EC2 self-managed nodes Kubernetes data plane location Customer data center or edge environment Customer data center or edge environment Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. 
Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, Amazon EKS in your data center or edge environments 26 Amazon EKS User Guide bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in airgapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWSvended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can"} -{"global_id": 591, "doc_id": "eks", "chunk_id": "35", "question_id": 4, "question": "What types of infrastructure does Amazon EKS Anywhere support?", "answer_span": "Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, bare metal, Nutanix, Apache CloudStack, and AWS Snow.", "chunk": "Amazon EKS Hybrid Nodes require a reliable connection from your on-premises environments to an AWS Region, and you can use the same Amazon EKS clusters, features, and tools you use to run applications in the cloud. When running on AWS Outposts, you can alternatively deploy the entire Kubernetes cluster on AWS Outposts with Amazon EKS local clusters on AWS Outposts. Amazon EKS Hybrid Nodes Amazon EKS on AWS Outposts Kuberenetes control plane management AWS-managed AWS-managed Kubernetes control plane location AWS Regions AWS Regions or AWS Outposts Kubernetes data plane Customer-managed physical or virtual machines Amazon EC2 self-managed nodes Kubernetes data plane location Customer data center or edge environment Customer data center or edge environment Amazon EKS Anywhere for air-gapped environments Amazon EKS Anywhere simplifies Kubernetes cluster management through the automation of undifferentiated heavy lifting such as infrastructure setup and Kubernetes cluster lifecycle operations in on-premises and edge environments. Unlike Amazon EKS, Amazon EKS Anywhere is a customer-managed product and customers are responsible for cluster lifecycle operations and maintenance of Amazon EKS Anywhere clusters. Amazon EKS Anywhere is built on the Kubernetes sub-project Cluster API (CAPI) and supports a range of infrastructure including VMware vSphere, Amazon EKS in your data center or edge environments 26 Amazon EKS User Guide bare metal, Nutanix, Apache CloudStack, and AWS Snow. Amazon EKS Anywhere can be run in airgapped environments and offers optional integrations with regional AWS services for observability and identity management. To receive support for Amazon EKS Anywhere and access to AWSvended Kubernetes add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. 
Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can"} -{"global_id": 592, "doc_id": "eks", "chunk_id": "36", "question_id": 1, "question": "What can you purchase for Amazon EKS Anywhere?", "answer_span": "you can purchase Amazon EKS Anywhere Enterprise Subscriptions.", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kubecontroller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Amazon EKS tooling 27 Amazon EKS User Guide Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl –"} -{"global_id": 593, "doc_id": "eks", "chunk_id": "36", "question_id": 2, "question": "What does the Amazon EKS Connector allow you to do?", "answer_span": "You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console.", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. 
You can use this feature to view connected clusters in Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kubecontroller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Amazon EKS tooling 27 Amazon EKS User Guide Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl –"} -{"global_id": 594, "doc_id": "eks", "chunk_id": "36", "question_id": 3, "question": "What is included in Amazon EKS Distro?", "answer_span": "It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kubecontroller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins).", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kubecontroller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Amazon EKS tooling 27 Amazon EKS User Guide Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. 
In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl –"} -{"global_id": 595, "doc_id": "eks", "chunk_id": "36", "question_id": 4, "question": "What tools do you need to set up for managing Amazon EKS clusters?", "answer_span": "Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters.", "chunk": "add-ons, you can purchase Amazon EKS Anywhere Enterprise Subscriptions. Amazon EKS Anywhere Kuberenetes control plane management Customer-managed Kubernetes control plane location Customer data center or edge environment Kubernetes data plane Customer-managed physical or virtual machines Kubernetes data plane location Customer data center or edge environment Amazon EKS tooling You can use the Amazon EKS Connector to register and connect any conformant Kubernetes cluster to AWS and view it in the Amazon EKS console. After a cluster is connected, you can see the status, configuration, and workloads for that cluster in the Amazon EKS console. You can use this feature to view connected clusters in Amazon EKS console, but the Amazon EKS Connector does not enable management or mutating operations for your connected clusters through the Amazon EKS console. Amazon EKS Distro is the AWS distribution of the underlying Kubernetes components that power all Amazon EKS offerings. It includes the core components required for a functioning Kubernetes cluster such as Kubernetes control plane components (etcd, kube-apiserver, kube-scheduler, kubecontroller-manager) and networking components (CoreDNS, kube-proxy, CNI plugins). Amazon EKS Distro can be used to self-manage Kubernetes clusters with your choice of tooling. Amazon EKS Distro deployments are not covered by AWS Support Plans. Amazon EKS tooling 27 Amazon EKS User Guide Set up to use Amazon EKS To prepare for the command-line management of your Amazon EKS clusters, you need to install several tools. Use the following to set up credentials, create and modify clusters, and work with clusters once they are running: • Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl –"} -{"global_id": 596, "doc_id": "eks", "chunk_id": "37", "question_id": 1, "question": "What is the purpose of the AWS CLI?", "answer_span": "The AWS CLI is a command line tool for working with AWS services, including Amazon EKS.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional)– Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to have an Amazon EKS cluster on your local machine for testing applications. 
• Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"} -{"global_id": 597, "doc_id": "eks", "chunk_id": "37", "question_id": 2, "question": "What tool is recommended for managing Kubernetes objects within Amazon EKS clusters?", "answer_span": "Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional)– Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to have an Amazon EKS cluster on your local machine for testing applications. • Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"} -{"global_id": 598, "doc_id": "eks", "chunk_id": "37", "question_id": 3, "question": "What is eksctl used for?", "answer_span": "The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. 
Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional)– Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to have an Amazon EKS cluster on your local machine for testing applications. • Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"} -{"global_id": 599, "doc_id": "eks", "chunk_id": "37", "question_id": 4, "question": "What do you need to configure in the AWS CLI to provision resources?", "answer_span": "Then you need to configure these credentials in the AWS CLI.", "chunk": "• Set up AWS CLI – Get the AWS CLI to set up and manage the services you need to work with Amazon EKS clusters. In particular, you need AWS CLI to configure credentials, but you also need it with other AWS services. • Set up kubectl and eksctl – The eksctl CLI interacts with AWS to create, modify, and delete Amazon EKS clusters. Once a cluster is up, use the open source kubectl command to manage Kubernetes objects within your Amazon EKS clusters. • Set up a development environment (optional)– Consider adding the following tools: • Local deployment tool – If you’re new to Kubernetes, consider installing a local deployment tool like minikube or kind. These tools allow you to have an Amazon EKS cluster on your local machine for testing applications. • Package manager – helm is a popular package manager for Kubernetes that simplifies the installation and management of complex packages. With Helm, it’s easier to install and manage packages like the AWS Load Balancer Controller on your Amazon EKS cluster. Next steps • Set up AWS CLI • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up AWS CLI The AWS CLI is a command line tool for working with AWS services, including Amazon EKS. It is also used to authenticate IAM users or roles for access to the Amazon EKS cluster and other AWS resources from your local machine. To provision resources in AWS from the command line, you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. 
If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the"} -{"global_id": 600, "doc_id": "eks", "chunk_id": "38", "question_id": 1, "question": "What do you need to obtain to use the command line with AWS?", "answer_span": "you need to obtain an AWS access key ID and secret key to use in the command line.", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. Next steps 28 Amazon EKS User Guide To create an access key 1. Sign into the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account –:: In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin . • Multiple-user account –:: Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} -{"global_id": 601, "doc_id": "eks", "chunk_id": "38", "question_id": 2, "question": "Where can you find instructions to install or update the AWS CLI?", "answer_span": "see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide.", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. Next steps 28 Amazon EKS User Guide To create an access key 1. Sign into the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account –:: In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin . • Multiple-user account –:: Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. 
In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} -{"global_id": 602, "doc_id": "eks", "chunk_id": "38", "question_id": 3, "question": "What is the first step to create an access key?", "answer_span": "Sign into the AWS Management Console.", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. Next steps 28 Amazon EKS User Guide To create an access key 1. Sign into the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account –:: In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin . • Multiple-user account –:: Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} -{"global_id": 603, "doc_id": "eks", "chunk_id": "38", "question_id": 4, "question": "What command do you enter in the terminal to configure the AWS CLI?", "answer_span": "In a terminal window, enter the following command: aws configure", "chunk": "you need to obtain an AWS access key ID and secret key to use in the command line. Then you need to configure these credentials in the AWS CLI. If you haven’t already installed the AWS CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide. Next steps 28 Amazon EKS User Guide To create an access key 1. Sign into the AWS Management Console. 2. For single-user or multiple-user accounts: • Single-user account –:: In the top right, choose your AWS user name to open the navigation menu. For example, choose webadmin . 
• Multiple-user account –:: Choose IAM from the list of services. From the IAM Dashboard, select Users, and choose the name of the user. 3. Choose Security credentials. 4. Under Access keys, choose Create access key. 5. Choose Command Line Interface (CLI), then choose Next. 6. Choose Create access key. 7. Choose Download .csv file. To configure the AWS CLI After installing the AWS CLI, do the following steps to configure it. For more information, see Configure the AWS CLI in the AWS Command Line Interface User Guide. 1. In a terminal window, enter the following command: aws configure Optionally, you can configure a named profile, such as --profile cluster-admin. If you configure a named profile in the AWS CLI, you must always pass this flag in subsequent commands. 2. Enter your AWS credentials. For example: Access Key ID [None]: AKIAIOSFODNN7EXAMPLE Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI"} -{"global_id": 604, "doc_id": "eks", "chunk_id": "39", "question_id": 1, "question": "What is the default validity period of the security token?", "answer_span": "By default, the token is valid for 15 minutes.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. 
• eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} -{"global_id": 605, "doc_id": "eks", "chunk_id": "39", "question_id": 2, "question": "What command is used to get a new security token for the AWS CLI?", "answer_span": "If needed, run the following command to get a new security token for the AWS CLI.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} -{"global_id": 606, "doc_id": "eks", "chunk_id": "39", "question_id": 3, "question": "What does the command 'aws sts get-caller-identity' return?", "answer_span": "This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. 
You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} -{"global_id": 607, "doc_id": "eks", "chunk_id": "39", "question_id": 4, "question": "What are the two tools mentioned for managing Kubernetes clusters?", "answer_span": "Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster.", "chunk": "Default region name [None]: region-code Default output format [None]: json To create an access key 29 Amazon EKS User Guide To get a security token If needed, run the following command to get a new security token for the AWS CLI. For more information, see get-session-token in the AWS CLI Command Reference. By default, the token is valid for 15 minutes. To change the default session timeout, pass the -duration-seconds flag. For example: aws sts get-session-token --duration-seconds 3600 This command returns the temporary security credentials for an AWS CLI session. You should see the following response output: { \"Credentials\": { \"AccessKeyId\": \"ASIA5FTRU3LOEXAMPLE\", \"SecretAccessKey\": \"JnKgvwfqUD9mNsPoi9IbxAYEXAMPLE\", \"SessionToken\": \"VERYLONGSESSIONTOKENSTRING\", \"Expiration\": \"2023-02-17T03:14:24+00:00\" } } To verify the user identity If needed, run the following command to verify the AWS credentials for your IAM user identity (such as ClusterAdmin) for the terminal session. aws sts get-caller-identity This command returns the Amazon Resource Name (ARN) of the IAM entity that’s configured for the AWS CLI. 
You should see the following example response output: { \"UserId\": \"AKIAIOSFODNN7EXAMPLE\", \"Account\": \"01234567890\", \"Arn\": \"arn:aws:iam::01234567890:user/ClusterAdmin\" } To get a security token 30 Amazon EKS User Guide Next steps • Set up kubectl and eksctl • Quickstart: Deploy a web app and store data Set up kubectl and eksctl Once the AWS CLI is installed, there are two other tools you should install to create and manage your Kubernetes clusters: • kubectl: The kubectl command line tool is the main tool you will use to manage resources within your Kubernetes cluster. This page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying"} -{"global_id": 608, "doc_id": "eks", "chunk_id": "40", "question_id": 1, "question": "What does the page describe about kubectl?", "answer_span": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster.", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} -{"global_id": 609, "doc_id": "eks", "chunk_id": "40", "question_id": 2, "question": "What is eksctl used for?", "answer_span": "The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters.", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. 
Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} -{"global_id": 610, "doc_id": "eks", "chunk_id": "40", "question_id": 3, "question": "What must you ensure about the kubectl version in relation to your Amazon EKS cluster?", "answer_span": "You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane.", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} -{"global_id": 611, "doc_id": "eks", "chunk_id": "40", "question_id": 4, "question": "What command can you run to check if kubectl is installed?", "answer_span": "kubectl version --client", "chunk": "page describes how to download and set up the kubectl binary that matches the version of your Kubernetes cluster. See Install or update kubectl. • eksctl: The eksctl command line tool is made for creating EKS clusters in the AWS cloud or on-premises (with EKS Anywhere), as well as modifying and deleting those clusters. See Install eksctl. Install or update kubectl This topic helps you to download and install, or update, the kubectl binary on your device. The binary is identical to the upstream community versions. The binary is not unique to Amazon EKS or AWS. Use the steps below to get the specific version of kubectl that you need, although many builders simply run brew install kubectl to install it. Note You must use a kubectl version that is within one minor version difference of your Amazon EKS cluster control plane. For example, a 1.32 kubectl client works with Kubernetes 1.31, 1.32, and 1.33 clusters. Step 1: Check if kubectl is installed Determine whether you already have kubectl installed on your device. kubectl version --client Next steps 31"} -{"global_id": 612, "doc_id": "lambda", "chunk_id": "0", "question_id": 1, "question": "What does AWS Lambda allow you to do?", "answer_span": "You can use AWS Lambda to run code without provisioning or managing servers.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. 
For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} -{"global_id": 613, "doc_id": "lambda", "chunk_id": "0", "question_id": 2, "question": "What is one of the responsibilities of the user when using AWS Lambda?", "answer_span": "When using Lambda, you are responsible only for your code.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. 
To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} -{"global_id": 614, "doc_id": "lambda", "chunk_id": "0", "question_id": 3, "question": "What is an ideal application scenario for using AWS Lambda?", "answer_span": "Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} -{"global_id": 615, "doc_id": "lambda", "chunk_id": "0", "question_id": 4, "question": "Which AWS services can be combined with Lambda to build web applications?", "answer_span": "Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. 
The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} -{"global_id": 616, "doc_id": "lambda", "chunk_id": "1", "question_id": 1, "question": "What services can be used to build mobile backends?", "answer_span": "Build backends using Lambda and Amazon API Gateway to authenticate and process API requests.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. 
You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} -{"global_id": 617, "doc_id": "lambda", "chunk_id": "1", "question_id": 2, "question": "What is the purpose of using Amazon Simple Storage Service (Amazon S3) with Lambda?", "answer_span": "Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} -{"global_id": 618, "doc_id": "lambda", "chunk_id": "1", "question_id": 3, "question": "How does Lambda handle database operations?", "answer_span": "Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. 
• Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} -{"global_id": 619, "doc_id": "lambda", "chunk_id": "1", "question_id": 4, "question": "What is the first step in how Lambda works?", "answer_span": "You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. 
The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} -{"global_id": 620, "doc_id": "lambda", "chunk_id": "2", "question_id": 1, "question": "What format is the event data passed to Lambda functions?", "answer_span": "passing event data in JSON format", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} -{"global_id": 621, "doc_id": "lambda", "chunk_id": "2", "question_id": 2, "question": "What do Lambda layers optimize?", "answer_span": "Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions.", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. 
Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} -{"global_id": 622, "doc_id": "lambda", "chunk_id": "2", "question_id": 3, "question": "What feature allows for safe testing of new features in Lambda?", "answer_span": "Versions safely test new features while maintaining stable production environments.", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. 
• Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} -{"global_id": 623, "doc_id": "lambda", "chunk_id": "2", "question_id": 4, "question": "What do VPC networks provide in the context of Lambda?", "answer_span": "VPC networks secure sensitive resources and internal services.", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} -{"global_id": 624, "doc_id": "lambda", "chunk_id": "3", "question_id": 1, "question": "What are Lambda functions used for?", "answer_span": "A Lambda function is a small block of code that runs in response to events.", "chunk": "tools. 
Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} -{"global_id": 625, "doc_id": "lambda", "chunk_id": "3", "question_id": 2, "question": "What do function handlers represent in Lambda?", "answer_span": "Function handlers are the entry point for event objects that your Lambda function code processes.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. 
Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} -{"global_id": 626, "doc_id": "lambda", "chunk_id": "3", "question_id": 3, "question": "What do Lambda execution environments manage?", "answer_span": "Lambda execution environments manage the resources required to run your function.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} -{"global_id": 627, "doc_id": "lambda", "chunk_id": "3", "question_id": 4, "question": "Where can you find information on how Lambda works?", "answer_span": "For information on how Lambda works, see How Lambda works.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. 
• Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} -{"global_id": 628, "doc_id": "lambda", "chunk_id": "4", "question_id": 1, "question": "What are Lambda functions used for?", "answer_span": "In Lambda, functions are the fundamental building blocks you use to create applications.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. 
The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} -{"global_id": 629, "doc_id": "lambda", "chunk_id": "4", "question_id": 2, "question": "What is a Lambda function handler?", "answer_span": "A Lambda function handler is the method in your function code that processes events.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} -{"global_id": 630, "doc_id": "lambda", "chunk_id": "4", "question_id": 3, "question": "How many handlers can a Lambda function have?", "answer_span": "Lambda functions can only have one handler.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. 
Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} -{"global_id": 631, "doc_id": "lambda", "chunk_id": "4", "question_id": 4, "question": "What happens to the execution environment after a function has finished running?", "answer_span": "if the function is invoked again, Lambda can re-use the existing execution environment.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} -{"global_id": 632, "doc_id": "lambda", "chunk_id": "5", "question_id": 1, "question": "What happens to the Lambda execution environment after a function has finished running?", "answer_span": "After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment.", "chunk": "to run in. 
After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} -{"global_id": 633, "doc_id": "lambda", "chunk_id": "5", "question_id": 2, "question": "How does Lambda handle security updates for managed runtimes?", "answer_span": "For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. 
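A short sketch of a handler consuming the example weather event shown in this chunk; by the time the handler runs, the runtime has already converted the JSON document into a Python dict:

```python
# Sketch: reading fields from the custom weather event described above.
def lambda_handler(event, context):
    location = event["Location"]  # e.g. "SEA"
    temps = event["WeatherData"]["TemperaturesF"]
    return {
        "location": location,
        "temperature_range_f": temps["MaxTempF"] - temps["MinTempF"],
    }
```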
You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} -{"global_id": 634, "doc_id": "lambda", "chunk_id": "5", "question_id": 3, "question": "What is a trigger in the context of AWS Lambda?", "answer_span": "A trigger connects your function to an event source, and your function can have multiple triggers.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} -{"global_id": 635, "doc_id": "lambda", "chunk_id": "5", "question_id": 4, "question": "What format does Lambda receive event data in?", "answer_span": "When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. 
Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} -{"global_id": 636, "doc_id": "lambda", "chunk_id": "6", "question_id": 1, "question": "What are the two main types of permissions that need to be configured for Lambda?", "answer_span": "For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. 
A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} -{"global_id": 637, "doc_id": "lambda", "chunk_id": "6", "question_id": 2, "question": "What is a Lambda execution role?", "answer_span": "A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy.", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} -{"global_id": 638, "doc_id": "lambda", "chunk_id": "6", "question_id": 3, "question": "What actions might a Lambda function perform on other AWS resources?", "answer_span": "For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue.", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. 
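For the event source mappings described above, a sketch using boto3 to map an Amazon SQS queue to a function; the queue ARN, function name, and batch size are illustrative values:

```python
import boto3

lambda_client = boto3.client("lambda")

# Sketch: Lambda polls the queue and invokes the function with batched records.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:my-queue",  # hypothetical queue
    FunctionName="my-function",                                    # hypothetical function
    BatchSize=10,
)
```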
To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} -{"global_id": 639, "doc_id": "lambda", "chunk_id": "6", "question_id": 4, "question": "Can a single Lambda execution role be used by more than one function?", "answer_span": "Every Lambda function must have an execution role, and a single role can be used by more than one function.", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. 
When a function is invoked, Lambda assumes the"} -{"global_id": 640, "doc_id": "lambda", "chunk_id": "7", "question_id": 1, "question": "What must every Lambda function have?", "answer_span": "Every Lambda function must have an execution role, and a single role can be used by more than one function.", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} -{"global_id": 641, "doc_id": "lambda", "chunk_id": "7", "question_id": 2, "question": "What does the role's policy give your function permission to do?", "answer_span": "The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs.", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. 
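A sketch of creating an execution role and attaching the AmazonDynamoDBFullAccess managed policy mentioned above, using boto3; the role name is a hypothetical value:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="my-function-role",                        # hypothetical role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS managed policy for DynamoDB access described in the chunk.
iam.attach_role_policy(
    RoleName="my-function-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess",
)
```

As the surrounding text goes on to recommend, a production role would usually replace the broad managed policy with a narrower customer-managed one.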
Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} -{"global_id": 642, "doc_id": "lambda", "chunk_id": "7", "question_id": 3, "question": "How can you add extra permissions to your Lambda function's role?", "answer_span": "To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions.", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} -{"global_id": 643, "doc_id": "lambda", "chunk_id": "7", "question_id": 4, "question": "What must your function's resource-based policy grant for another AWS service to invoke your function?", "answer_span": "your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action.", "chunk": "your account that has specific permissions associated with it defined in a policy. 
Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} -{"global_id": 644, "doc_id": "lambda", "chunk_id": "8", "question_id": 1, "question": "What action is used to invoke a Lambda function?", "answer_span": "use the lambda:InvokeFunction action.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. 
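A sketch of the scoped resource-based policy statement recommended here, added with boto3's `add_permission`; the function name, bucket, and account ID are hypothetical:

```python
import boto3

lambda_client = boto3.client("lambda")

# Sketch: allow S3 to invoke the function, but only for one bucket in one account.
lambda_client.add_permission(
    FunctionName="my-function",                    # hypothetical function
    StatementId="AllowInvokeFromMyBucket",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::amzn-s3-demo-bucket",  # restrict to a single bucket
    SourceAccount="123456789012",                  # restrict to a single account
)
```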
Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} -{"global_id": 645, "doc_id": "lambda", "chunk_id": "8", "question_id": 2, "question": "What is the principle of least privilege in the context of Lambda permissions?", "answer_span": "security best practice is to grant only the permissions required to perform a task.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} -{"global_id": 646, "doc_id": "lambda", "chunk_id": "8", "question_id": 3, "question": "What is recommended as you move from early development through test and production regarding permissions?", "answer_span": "we recommend you reduce permissions to only those needed by defining your own customer-managed policies.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. 
Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} -{"global_id": 647, "doc_id": "lambda", "chunk_id": "8", "question_id": 4, "question": "What should you limit access to when granting permissions to Amazon S3 to invoke your function?", "answer_span": "best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide best practice is to limit access to individual buckets, or buckets in particular AWS accounts, rather than giving blanket permissions to the S3 service. Lambda permissions and roles 8 AWS Lambda Developer Guide Running code with Lambda When you write a Lambda function, you are creating code that will run in a unique serverless environment. 
Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model"} -{"global_id": 648, "doc_id": "lambda", "chunk_id": "9", "question_id": 1, "question": "What are the two key aspects involved in understanding how Lambda runs your code?", "answer_span": "Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code"} -{"global_id": 649, "doc_id": "lambda", "chunk_id": "9", "question_id": 2, "question": "What does the Lambda programming model include?", "answer_span": "The programming model includes your runtime and handler.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. 
The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code"} -{"global_id": 650, "doc_id": "lambda", "chunk_id": "9", "question_id": 3, "question": "What is the role of the handler in the Lambda programming model?", "answer_span": "Essential to this model is the handler, where Lambda sends events to be processed by your code.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. 
This includes setting up your chosen runtime, loading your code, and running any startup code"} -{"global_id": 651, "doc_id": "lambda", "chunk_id": "9", "question_id": 4, "question": "What are the three phases of the Lambda execution environment lifecycle?", "answer_span": "Each environment follows a lifecycle of three phases.", "chunk": "creating code that will run in a unique serverless environment. Understanding how Lambda actually runs your code involves two key aspects: the programming model that defines how your code interacts with Lambda, and the execution environment lifecycle that determines how Lambda manages your code's runtime environment. The Lambda programming model Programming model functions as a common set of rules for how Lambda works with your code, regardless of whether you're writing in Python, Java, or any other supported language. The programming model includes your runtime and handler. 1. Lambda receives an event. 2. Lambda uses the runtime (like Python or Java) to prepare the event in a format your code can use. 3. The runtime sends the formatted event to your handler. 4. Your handler processes the event using the code you've written in your Lambda function. Essential to this model is the handler, where Lambda sends events to be processed by your code. Think of it as the entry point to your code. When Lambda receives an event, it passes this event and some context information to your handler. The handler then runs your code to process these events - for example, it might read a file when it's uploaded to Amazon S3, analyze an image, or update a database. Once your code finishes processing an event, the handler is ready to process the next one. The Lambda execution model While the programming model defines how Lambda interacts with your code, Execution environment is where Lambda actually runs your function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code"} -{"global_id": 652, "doc_id": "lambda", "chunk_id": "10", "question_id": 1, "question": "What are the three phases of the Lambda environment lifecycle?", "answer_span": "Each environment follows a lifecycle of three phases.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. 
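A sketch of taking advantage of the execution environment reuse and /tmp storage described here: objects created outside the handler survive warm invocations, and /tmp can cache data between them. The bucket and object names are hypothetical:

```python
import os
import boto3

s3 = boto3.client("s3")  # created once per execution environment, then reused

CACHE_PATH = "/tmp/reference-data.json"

def lambda_handler(event, context):
    if not os.path.exists(CACHE_PATH):
        # Download once; later invocations in the same environment reuse the file.
        s3.download_file("amzn-s3-demo-bucket", "reference-data.json", CACHE_PATH)
    with open(CACHE_PATH) as f:
        return {"cached_bytes": len(f.read())}
```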
It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} -{"global_id": 653, "doc_id": "lambda", "chunk_id": "10", "question_id": 2, "question": "What happens during the Initialization phase?", "answer_span": "Lambda creates the environment and gets everything ready to run your function.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} -{"global_id": 654, "doc_id": "lambda", "chunk_id": "10", "question_id": 3, "question": "How does Lambda handle increased demand for function execution?", "answer_span": "As more events come in, Lambda creates additional environments to handle the increased demand.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. 
Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} -{"global_id": 655, "doc_id": "lambda", "chunk_id": "10", "question_id": 4, "question": "What does the Lambda programming model define?", "answer_span": "The programming model defines the interface between your code and the Lambda system.", "chunk": "function — it's a secure, isolated compute space created specifically for your function. Each environment follows a lifecycle of three phases. 1. Initialization: Lambda creates the environment and gets everything ready to run your function. This includes setting up your chosen runtime, loading your code, and running any startup code you've written. 2. Invocation: When events arrive, Lambda uses this environment to run your function. The environment can process many events over time, one after another. As more events come in, Running code 9 AWS Lambda Developer Guide Lambda creates additional environments to handle the increased demand. When demand drops, Lambda stops environments that are no longer needed. 3. Shutdown: Eventually, Lambda will shut down environments. Before doing this, it gives your function a chance to clean up any remaining tasks. This environment handles important aspects of running your function. It provides your function with memory and a /tmp directory for temporary storage. It maintains resources like database connections between invocations, so your function can reuse them. It offers features like provisioned concurrency, where Lambda prepares environments in advance to improve performance. Understanding the Lambda programming model Lambda provides a programming model that is common to all of the runtimes. The programming model defines the interface between your code and the Lambda system. 
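A sketch of the context object the runtime passes to the handler, which in the Python runtime exposes details such as the function name and request ID mentioned here:

```python
# Sketch: inspecting the context object passed alongside the event.
def lambda_handler(event, context):
    print(f"Function: {context.function_name}")
    print(f"Request ID: {context.aws_request_id}")
    print(f"Time remaining (ms): {context.get_remaining_time_in_millis()}")
    return "ok"
```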
You tell Lambda the entry point to your function by defining a handler in the function configuration. The runtime passes in objects to the handler that contain the invocation event and the context, such as the function name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable"} -{"global_id": 656, "doc_id": "lambda", "chunk_id": "11", "question_id": 1, "question": "What happens when the handler finishes processing the first event?", "answer_span": "When the handler finishes processing the first event, the runtime sends it another.", "chunk": "name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable resources like AWS SDK clients during initialization. Once initialized, each instance of your function can process thousands of requests. Your function also has access to local storage in the /tmp directory, a transient cache that can be used for multiple invocations. For more information, see Execution environment. When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution. The runtime captures logging output from your function and sends it to Amazon CloudWatch Logs. In addition to logging your function's output, the runtime also logs entries when function invocation starts and ends. This includes a report log with the request ID, billed duration, initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker. Running code 10 AWS Lambda Developer Guide Note Logging is subject to CloudWatch Logs quotas. Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped. Lambda scales your function by running additional instances of it as demand increases, and by stopping instances as demand decreases. This model leads to variations in application architecture, such as: • Unless noted otherwise, incoming requests might be processed out of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you"} -{"global_id": 657, "doc_id": "lambda", "chunk_id": "11", "question_id": 2, "question": "What can be reused in the function's class?", "answer_span": "The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused.", "chunk": "name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable resources like AWS SDK clients during initialization. Once initialized, each instance of your function can process thousands of requests. 
Your function also has access to local storage in the /tmp directory, a transient cache that can be used for multiple invocations. For more information, see Execution environment. When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution. The runtime captures logging output from your function and sends it to Amazon CloudWatch Logs. In addition to logging your function's output, the runtime also logs entries when function invocation starts and ends. This includes a report log with the request ID, billed duration, initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker. Running code 10 AWS Lambda Developer Guide Note Logging is subject to CloudWatch Logs quotas. Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped. Lambda scales your function by running additional instances of it as demand increases, and by stopping instances as demand decreases. This model leads to variations in application architecture, such as: • Unless noted otherwise, incoming requests might be processed out of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you"} -{"global_id": 658, "doc_id": "lambda", "chunk_id": "11", "question_id": 3, "question": "What does the runtime do when AWS X-Ray tracing is enabled?", "answer_span": "When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution.", "chunk": "name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable resources like AWS SDK clients during initialization. Once initialized, each instance of your function can process thousands of requests. Your function also has access to local storage in the /tmp directory, a transient cache that can be used for multiple invocations. For more information, see Execution environment. When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution. The runtime captures logging output from your function and sends it to Amazon CloudWatch Logs. In addition to logging your function's output, the runtime also logs entries when function invocation starts and ends. This includes a report log with the request ID, billed duration, initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker. Running code 10 AWS Lambda Developer Guide Note Logging is subject to CloudWatch Logs quotas. Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped. Lambda scales your function by running additional instances of it as demand increases, and by stopping instances as demand decreases. This model leads to variations in application architecture, such as: • Unless noted otherwise, incoming requests might be processed out of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. 
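As one illustration of the /tmp transient cache mentioned above, the following sketch downloads a reference object from Amazon S3 only when it is not already present in /tmp, so warm invocations on the same execution environment skip the download. The bucket, key, and file names are placeholders.

import os
import boto3

s3 = boto3.client("s3")

BUCKET = "example-bucket"        # placeholder bucket name
KEY = "reference/data.json"      # placeholder object key
LOCAL_PATH = "/tmp/data.json"    # /tmp persists across warm invocations

def lambda_handler(event, context):
    # Download only on a cold start or after the environment was recycled.
    if not os.path.exists(LOCAL_PATH):
        s3.download_file(BUCKET, KEY, LOCAL_PATH)

    with open(LOCAL_PATH) as f:
        data = f.read()
    return {"bytes_cached": len(data)}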
• Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you"} -{"global_id": 659, "doc_id": "lambda", "chunk_id": "11", "question_id": 4, "question": "What happens to log data due to CloudWatch Logs quotas?", "answer_span": "Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped.", "chunk": "name and request ID. When the handler finishes processing the first event, the runtime sends it another. The function's class stays in memory, so clients and variables that are declared outside of the handler method in initialization code can be reused. To save processing time on subsequent events, create reusable resources like AWS SDK clients during initialization. Once initialized, each instance of your function can process thousands of requests. Your function also has access to local storage in the /tmp directory, a transient cache that can be used for multiple invocations. For more information, see Execution environment. When AWS X-Ray tracing is enabled, the runtime records separate subsegments for initialization and execution. The runtime captures logging output from your function and sends it to Amazon CloudWatch Logs. In addition to logging your function's output, the runtime also logs entries when function invocation starts and ends. This includes a report log with the request ID, billed duration, initialization duration, and other details. If your function throws an error, the runtime returns that error to the invoker. Running code 10 AWS Lambda Developer Guide Note Logging is subject to CloudWatch Logs quotas. Log data can be lost due to throttling or, in some cases, when an instance of your function is stopped. Lambda scales your function by running additional instances of it as demand increases, and by stopping instances as demand decreases. This model leads to variations in application architecture, such as: • Unless noted otherwise, incoming requests might be processed out of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you"} -{"global_id": 660, "doc_id": "lambda", "chunk_id": "12", "question_id": 1, "question": "What should you not rely on regarding instances of your function?", "answer_span": "Do not rely on instances of your function being long lived, instead store your application's state elsewhere.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters. • Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. 
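Because instances are not long lived and requests can arrive concurrently or out of order, application state belongs outside the function. A minimal sketch of that guidance uses a conditional DynamoDB write as an idempotency guard; the table and key names are hypothetical.

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
STATE_TABLE = "processed-events"  # hypothetical table with partition key "pk"

def lambda_handler(event, context):
    event_id = event["id"]
    try:
        # Record the event only if it has not been processed before.
        dynamodb.put_item(
            TableName=STATE_TABLE,
            Item={"pk": {"S": event_id}},
            ConditionExpression="attribute_not_exists(pk)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"status": "duplicate", "id": event_id}
        raise
    return {"status": "processed", "id": event_id}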
The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. Running code 11 AWS Lambda Developer Guide The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running"} -{"global_id": 661, "doc_id": "lambda", "chunk_id": "12", "question_id": 2, "question": "What does the execution environment manage?", "answer_span": "The execution environment manages the resources required to run your function.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters. • Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. Running code 11 AWS Lambda Developer Guide The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. 
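The Runtime API referred to above is an HTTP interface exposed inside the execution environment. A simplified sketch of the loop a custom runtime's bootstrap process runs is shown below; it uses only the documented next-invocation and response endpoints and omits error reporting and extension coordination, and the handle function is a placeholder.

import json
import os
import urllib.request

API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{API}/2018-06-01/runtime/invocation"

def handle(event):
    # Placeholder for the function's business logic.
    return {"echo": event}

while True:
    # Block until Lambda has an event for this execution environment.
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    result = json.dumps(handle(event)).encode()

    # Report the invocation result back to Lambda.
    urllib.request.urlopen(
        urllib.request.Request(f"{BASE}/{request_id}/response", data=result)
    )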
Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running"} -{"global_id": 662, "doc_id": "lambda", "chunk_id": "12", "question_id": 3, "question": "What type of information do you specify when creating your Lambda function?", "answer_span": "When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters. • Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. Running code 11 AWS Lambda Developer Guide The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running"} -{"global_id": 663, "doc_id": "lambda", "chunk_id": "12", "question_id": 4, "question": "What APIs do the function's runtime and extensions communicate with Lambda using?", "answer_span": "The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API.", "chunk": "of order or concurrently. • Do not rely on instances of your function being long lived, instead store your application's state elsewhere. • Use local storage and class-level objects to increase performance, but keep to a minimum the size of your deployment package and the amount of data that you transfer onto the execution environment. For a hands-on introduction to the programming model in your preferred programming language, see the following chapters. 
• Building Lambda functions with Node.js • Building Lambda functions with Python • Building Lambda functions with Ruby • Building Lambda functions with Java • Building Lambda functions with Go • Building Lambda functions with C# • Building Lambda functions with PowerShell Understanding the Lambda execution environment lifecycle Lambda invokes your function in an execution environment, which provides a secure and isolated runtime environment. The execution environment manages the resources required to run your function. The execution environment also provides lifecycle support for the function's runtime and any external extensions associated with your function. Running code 11 AWS Lambda Developer Guide The function's runtime communicates with Lambda using the Runtime API. Extensions communicate with Lambda using the Extensions API. Extensions can also receive log messages and other telemetry from the function by using the Telemetry API. When you create your Lambda function, you specify configuration information, such as the amount of memory available and the maximum execution time allowed for your function. Lambda uses this information to set up the execution environment. The function's runtime and each external extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running"} -{"global_id": 664, "doc_id": "lambda", "chunk_id": "13", "question_id": 1, "question": "What are the three tasks performed during the Init phase?", "answer_span": "In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init)", "chunk": "extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running code 12 AWS Lambda Developer Guide Each phase starts with an event that Lambda sends to the runtime and to all registered extensions. The runtime and each extension indicate completion by sending a Next API request. Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events. Topics • Init phase • Failures during the Init phase • Restore phase (Lambda SnapStart only) • Invoke phase • Failures during the invoke phase • Shutdown phase Init phase In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init) • Run any before-checkpoint runtime hooks (Lambda SnapStart only) The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request. The Init phase is limited to 10 seconds. If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout. When Lambda SnapStart is activated, the Init phase happens when you publish a function version. 
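The Init phase described in this chunk runs the function's static code exactly once per execution environment. A small, illustrative way to observe that from inside a Python function is a module-level flag that distinguishes cold from warm invocations; the variable names are mine.

import time

# Function init: this block runs during the Init phase, before any invocation.
INIT_STARTED = time.time()
EXPENSIVE_CONFIG = {"loaded_at": INIT_STARTED}  # stand-in for real setup work
_cold = True

def lambda_handler(event, context):
    global _cold
    cold_start = _cold
    _cold = False  # every later invocation on this environment is "warm"

    return {
        "cold_start": cold_start,
        "init_age_seconds": round(time.time() - INIT_STARTED, 3),
    }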
Lambda saves a snapshot of the memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or"} -{"global_id": 665, "doc_id": "lambda", "chunk_id": "13", "question_id": 2, "question": "What happens if the tasks in the Init phase do not complete within 10 seconds?", "answer_span": "If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout.", "chunk": "extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running code 12 AWS Lambda Developer Guide Each phase starts with an event that Lambda sends to the runtime and to all registered extensions. The runtime and each extension indicate completion by sending a Next API request. Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events. Topics • Init phase • Failures during the Init phase • Restore phase (Lambda SnapStart only) • Invoke phase • Failures during the invoke phase • Shutdown phase Init phase In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init) • Run any before-checkpoint runtime hooks (Lambda SnapStart only) The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request. The Init phase is limited to 10 seconds. If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout. When Lambda SnapStart is activated, the Init phase happens when you publish a function version. Lambda saves a snapshot of the memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or"} -{"global_id": 666, "doc_id": "lambda", "chunk_id": "13", "question_id": 3, "question": "What does Lambda do when SnapStart is activated?", "answer_span": "When Lambda SnapStart is activated, the Init phase happens when you publish a function version.", "chunk": "extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running code 12 AWS Lambda Developer Guide Each phase starts with an event that Lambda sends to the runtime and to all registered extensions. The runtime and each extension indicate completion by sending a Next API request. 
Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events. Topics • Init phase • Failures during the Init phase • Restore phase (Lambda SnapStart only) • Invoke phase • Failures during the invoke phase • Shutdown phase Init phase In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init) • Run any before-checkpoint runtime hooks (Lambda SnapStart only) The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request. The Init phase is limited to 10 seconds. If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout. When Lambda SnapStart is activated, the Init phase happens when you publish a function version. Lambda saves a snapshot of the memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or"} -{"global_id": 667, "doc_id": "lambda", "chunk_id": "13", "question_id": 4, "question": "What indicates the completion of the Init phase?", "answer_span": "The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request.", "chunk": "extension are processes that run within the execution environment. Permissions, resources, credentials, and environment variables are shared between the function and the extensions. Topics • Lambda execution environment lifecycle • Cold starts and latency • Reducing cold starts with Provisioned Concurrency • Optimizing static initialization Lambda execution environment lifecycle Running code 12 AWS Lambda Developer Guide Each phase starts with an event that Lambda sends to the runtime and to all registered extensions. The runtime and each extension indicate completion by sending a Next API request. Lambda freezes the execution environment when the runtime and each extension have completed and there are no pending events. Topics • Init phase • Failures during the Init phase • Restore phase (Lambda SnapStart only) • Invoke phase • Failures during the invoke phase • Shutdown phase Init phase In the Init phase, Lambda performs three tasks: • Start all extensions (Extension init) • Bootstrap the runtime (Runtime init) • Run the function's static code (Function init) • Run any before-checkpoint runtime hooks (Lambda SnapStart only) The Init phase ends when the runtime and all extensions signal that they are ready by sending a Next API request. The Init phase is limited to 10 seconds. If all three tasks do not complete within 10 seconds, Lambda retries the Init phase at the time of the first function invocation with the configured function timeout. When Lambda SnapStart is activated, the Init phase happens when you publish a function version. Lambda saves a snapshot of the memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. 
Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or"} -{"global_id": 668, "doc_id": "lambda", "chunk_id": "14", "question_id": 1, "question": "What happens if a function crashes or times out during the Init phase?", "answer_span": "If a function crashes or times out during the Init phase, Lambda emits error information in the INIT_REPORT log.", "chunk": "memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or SnapStart. For provisioned concurrency and SnapStart functions, your initialization code Running code 13 AWS Lambda Developer Guide can run for up to 15 minutes. The time limit is 130 seconds or the configured function timeout (maximum 900 seconds), whichever is higher. When you use provisioned concurrency, Lambda initializes the execution environment when you configure the PC settings for a function. Lambda also ensures that initialized execution environments are always available in advance of invocations. You may see gaps between your function's invocation and initialization phases. Depending on your function's runtime and memory configuration, you may also see variable latency on the first invocation on an initialized execution environment. For functions using on-demand concurrency, Lambda may occasionally initialize execution environments ahead of invocation requests. When this happens, you may also observe a time gap between your function's initialization and invocation phases. We recommend you to not take a dependency on this behavior. Failures during the Init phase If a function crashes or times out during the Init phase, Lambda emits error information in the INIT_REPORT log. Example — INIT_REPORT log for timeout INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: timeout Example — INIT_REPORT log for extension failure INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When"} -{"global_id": 669, "doc_id": "lambda", "chunk_id": "14", "question_id": 2, "question": "How long can initialization code run for functions using provisioned concurrency or SnapStart?", "answer_span": "For provisioned concurrency and SnapStart functions, your initialization code can run for up to 15 minutes.", "chunk": "memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or SnapStart. For provisioned concurrency and SnapStart functions, your initialization code Running code 13 AWS Lambda Developer Guide can run for up to 15 minutes. The time limit is 130 seconds or the configured function timeout (maximum 900 seconds), whichever is higher. When you use provisioned concurrency, Lambda initializes the execution environment when you configure the PC settings for a function. 
Lambda also ensures that initialized execution environments are always available in advance of invocations. You may see gaps between your function's invocation and initialization phases. Depending on your function's runtime and memory configuration, you may also see variable latency on the first invocation on an initialized execution environment. For functions using on-demand concurrency, Lambda may occasionally initialize execution environments ahead of invocation requests. When this happens, you may also observe a time gap between your function's initialization and invocation phases. We recommend you to not take a dependency on this behavior. Failures during the Init phase If a function crashes or times out during the Init phase, Lambda emits error information in the INIT_REPORT log. Example — INIT_REPORT log for timeout INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: timeout Example — INIT_REPORT log for extension failure INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When"} -{"global_id": 670, "doc_id": "lambda", "chunk_id": "14", "question_id": 3, "question": "What does Lambda do when you configure the PC settings for a function?", "answer_span": "When you use provisioned concurrency, Lambda initializes the execution environment when you configure the PC settings for a function.", "chunk": "memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or SnapStart. For provisioned concurrency and SnapStart functions, your initialization code Running code 13 AWS Lambda Developer Guide can run for up to 15 minutes. The time limit is 130 seconds or the configured function timeout (maximum 900 seconds), whichever is higher. When you use provisioned concurrency, Lambda initializes the execution environment when you configure the PC settings for a function. Lambda also ensures that initialized execution environments are always available in advance of invocations. You may see gaps between your function's invocation and initialization phases. Depending on your function's runtime and memory configuration, you may also see variable latency on the first invocation on an initialized execution environment. For functions using on-demand concurrency, Lambda may occasionally initialize execution environments ahead of invocation requests. When this happens, you may also observe a time gap between your function's initialization and invocation phases. We recommend you to not take a dependency on this behavior. Failures during the Init phase If a function crashes or times out during the Init phase, Lambda emits error information in the INIT_REPORT log. 
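Provisioned concurrency, referenced in this chunk, is configured on a published version or alias rather than on $LATEST. A hedged sketch using the AWS SDK for Python follows; the function name and qualifier are placeholders, and the parameter names should be checked against the current boto3 documentation.

import boto3

lambda_client = boto3.client("lambda")

# Keep 10 execution environments initialized in advance for version 1
# of a hypothetical function.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",        # placeholder function name
    Qualifier="1",                     # published version or alias
    ProvisionedConcurrentExecutions=10,
)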
Example — INIT_REPORT log for timeout INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: timeout Example — INIT_REPORT log for extension failure INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When"} -{"global_id": 671, "doc_id": "lambda", "chunk_id": "14", "question_id": 4, "question": "What is emitted in the INIT_REPORT log if the Init phase is successful?", "answer_span": "If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled.", "chunk": "memory and disk state of the initialized execution environment, persists the encrypted snapshot, and caches it for low-latency access. If you have a before-checkpoint runtime hook, then the code runs at the end of Init phase. Note The 10-second timeout doesn't apply to functions that are using provisioned concurrency or SnapStart. For provisioned concurrency and SnapStart functions, your initialization code Running code 13 AWS Lambda Developer Guide can run for up to 15 minutes. The time limit is 130 seconds or the configured function timeout (maximum 900 seconds), whichever is higher. When you use provisioned concurrency, Lambda initializes the execution environment when you configure the PC settings for a function. Lambda also ensures that initialized execution environments are always available in advance of invocations. You may see gaps between your function's invocation and initialization phases. Depending on your function's runtime and memory configuration, you may also see variable latency on the first invocation on an initialized execution environment. For functions using on-demand concurrency, Lambda may occasionally initialize execution environments ahead of invocation requests. When this happens, you may also observe a time gap between your function's initialization and invocation phases. We recommend you to not take a dependency on this behavior. Failures during the Init phase If a function crashes or times out during the Init phase, Lambda emits error information in the INIT_REPORT log. Example — INIT_REPORT log for timeout INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: timeout Example — INIT_REPORT log for extension failure INIT_REPORT Init Duration: 1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When"} -{"global_id": 672, "doc_id": "lambda", "chunk_id": "15", "question_id": 1, "question": "What happens if the Init phase is successful?", "answer_span": "If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled.", "chunk": "1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. 
For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and after-restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log. Example — RESTORE_REPORT log for timeout RESTORE_REPORT Restore Duration: 1236.04 ms Status: timeout Example — RESTORE_REPORT log for runtime hook failure RESTORE_REPORT Restore Duration: 1236.04 ms Status: error Error Type: Runtime.ExitError For more information about the RESTORE_REPORT log, see Monitoring for Lambda SnapStart. Invoke phase When a Lambda function is invoked in response to a Next API request, Lambda sends an Invoke event to the runtime and to each extension. The function's timeout setting limits the duration of the entire Invoke phase. For example, if you set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished"} -{"global_id": 673, "doc_id": "lambda", "chunk_id": "15", "question_id": 2, "question": "What does Lambda do during the Restore phase for SnapStart functions?", "answer_span": "When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch.", "chunk": "1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and after-restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log. 
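The before-checkpoint and after-restore runtime hooks discussed in this chunk let code run just before the snapshot is taken and just after an environment is resumed. The sketch below assumes the snapshot_restore_py registration helpers that AWS provides for SnapStart on Python, so treat the import and registration function names as assumptions to verify against the current guide; the connection object is a stand-in.

import random

# Assumption: snapshot_restore_py is the hook-registration helper for SnapStart
# on Python runtimes; verify the import against the current documentation.
from snapshot_restore_py import register_before_snapshot, register_after_restore

connection = None  # stand-in for a database or HTTP connection


def close_connection():
    # Runs at the end of the Init phase, before Lambda takes the snapshot.
    global connection
    connection = None


def refresh_state():
    # Runs during the Restore phase and must finish within the 10-second limit.
    global connection
    random.seed()          # avoid identical random state in every restored copy
    connection = object()  # stand-in for re-opening a real connection


register_before_snapshot(close_connection)
register_after_restore(refresh_state)


def lambda_handler(event, context):
    return {"connection_ready": connection is not None}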
Example — RESTORE_REPORT log for timeout RESTORE_REPORT Restore Duration: 1236.04 ms Status: timeout Example — RESTORE_REPORT log for runtime hook failure RESTORE_REPORT Restore Duration: 1236.04 ms Status: error Error Type: Runtime.ExitError For more information about the RESTORE_REPORT log, see Monitoring for Lambda SnapStart. Invoke phase When a Lambda function is invoked in response to a Next API request, Lambda sends an Invoke event to the runtime and to each extension. The function's timeout setting limits the duration of the entire Invoke phase. For example, if you set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished"} -{"global_id": 674, "doc_id": "lambda", "chunk_id": "15", "question_id": 3, "question": "What is the timeout limit for restore runtime hooks?", "answer_span": "restore runtime hooks must complete within the timeout limit (10 seconds).", "chunk": "1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and afterRunning code 14 AWS Lambda Developer Guide restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log. Example — RESTORE_REPORT log for timeout RESTORE_REPORT Restore Duration: 1236.04 ms Status: timeout Example — RESTORE_REPORT log for runtime hook failure RESTORE_REPORT Restore Duration: 1236.04 ms Status: error Error Type: Runtime.ExitError For more information about the RESTORE_REPORT log, see Monitoring for Lambda SnapStart. Invoke phase When a Lambda function is invoked in response to a Next API request, Lambda sends an Invoke event to the runtime and to each extension. The function's timeout setting limits the duration of the entire Invoke phase. For example, if you set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. 
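Since the configured timeout bounds the whole Invoke phase (function plus extensions), long-running handlers often check the remaining time and stop cleanly instead of being cut off mid-item. A minimal sketch using the context object's documented method; the per-item work and the 5-second margin are illustrative.

import time

def lambda_handler(event, context):
    items = event.get("items", [])
    processed = []

    for item in items:
        # Leave a safety margin before the configured function timeout.
        if context.get_remaining_time_in_millis() < 5_000:
            break
        time.sleep(0.1)          # stand-in for real per-item work
        processed.append(item)

    return {"processed": len(processed), "skipped": len(items) - len(processed)}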
The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished"} -{"global_id": 675, "doc_id": "lambda", "chunk_id": "15", "question_id": 4, "question": "What is emitted in the RESTORE_REPORT log if the Restore phase fails?", "answer_span": "If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log.", "chunk": "1236.04 ms Phase: init Status: error Error Type: Extension.Crash If the Init phase is successful, Lambda doesn't emit the INIT_REPORT log unless SnapStart or provisioned concurrency is enabled. SnapStart and provisioned concurrency functions always emit INIT_REPORT. For more information, see Monitoring for Lambda SnapStart. Restore phase (Lambda SnapStart only) When you first invoke a SnapStart function and as the function scales up, Lambda resumes new execution environments from the persisted snapshot instead of initializing the function from scratch. If you have an after-restore runtime hook, the code runs at the end of the Restore phase. You are charged for the duration of after-restore runtime hooks. The runtime must load and afterRunning code 14 AWS Lambda Developer Guide restore runtime hooks must complete within the timeout limit (10 seconds). Otherwise, you'll get a SnapStartTimeoutException. When the Restore phase completes, Lambda invokes the function handler (the Invoke phase). Failures during the Restore phase If the Restore phase fails, Lambda emits error information in the RESTORE_REPORT log. Example — RESTORE_REPORT log for timeout RESTORE_REPORT Restore Duration: 1236.04 ms Status: timeout Example — RESTORE_REPORT log for runtime hook failure RESTORE_REPORT Restore Duration: 1236.04 ms Status: error Error Type: Runtime.ExitError For more information about the RESTORE_REPORT log, see Monitoring for Lambda SnapStart. Invoke phase When a Lambda function is invoked in response to a Next API request, Lambda sends an Invoke event to the runtime and to each extension. The function's timeout setting limits the duration of the entire Invoke phase. For example, if you set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished"} -{"global_id": 676, "doc_id": "lambda", "chunk_id": "16", "question_id": 1, "question": "What is the function timeout set to?", "answer_span": "set the function timeout as 360 seconds", "chunk": "set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished executing. The invoke phase ends after the runtime and all extensions signal that they are done by sending a Next API request. Failures during the invoke phase If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment. The following diagram illustrates Lambda execution environment behavior when there's an invoke failure: Running code 15 AWS Lambda Developer Guide In the previous diagram: • The first phase is the INIT phase, which runs without errors. • The second phase is the INVOKE phase, which runs without errors. 
• At some point, suppose your function runs into an invoke failure (common causes include function timeouts, runtime errors, memory exhaustion, VPC connectivity issues, permission errors, concurrency limits, and various configuration problems). For a complete list of possible invocation failures, see the section called “Invocation”. The third phase, labeled INVOKE WITH ERROR , illustrates this scenario. When this happens, the Lambda service performs a reset. The reset behaves like a Shutdown event. First, Lambda shuts down the runtime, then sends a Shutdown event to each registered external extension. The event includes the reason for the shutdown. If this environment is used for a new invocation, Lambda re-initializes the extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you"} -{"global_id": 677, "doc_id": "lambda", "chunk_id": "16", "question_id": 2, "question": "What happens if the Lambda function crashes or times out during the Invoke phase?", "answer_span": "If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment.", "chunk": "set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished executing. The invoke phase ends after the runtime and all extensions signal that they are done by sending a Next API request. Failures during the invoke phase If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment. The following diagram illustrates Lambda execution environment behavior when there's an invoke failure: Running code 15 AWS Lambda Developer Guide In the previous diagram: • The first phase is the INIT phase, which runs without errors. • The second phase is the INVOKE phase, which runs without errors. • At some point, suppose your function runs into an invoke failure (common causes include function timeouts, runtime errors, memory exhaustion, VPC connectivity issues, permission errors, concurrency limits, and various configuration problems). For a complete list of possible invocation failures, see the section called “Invocation”. The third phase, labeled INVOKE WITH ERROR , illustrates this scenario. When this happens, the Lambda service performs a reset. The reset behaves like a Shutdown event. First, Lambda shuts down the runtime, then sends a Shutdown event to each registered external extension. The event includes the reason for the shutdown. If this environment is used for a new invocation, Lambda re-initializes the extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. 
Due to these changes, you"} -{"global_id": 678, "doc_id": "lambda", "chunk_id": "16", "question_id": 3, "question": "What does the reset behave like when an invoke failure occurs?", "answer_span": "The reset behaves like a Shutdown event.", "chunk": "set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished executing. The invoke phase ends after the runtime and all extensions signal that they are done by sending a Next API request. Failures during the invoke phase If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment. The following diagram illustrates Lambda execution environment behavior when there's an invoke failure: Running code 15 AWS Lambda Developer Guide In the previous diagram: • The first phase is the INIT phase, which runs without errors. • The second phase is the INVOKE phase, which runs without errors. • At some point, suppose your function runs into an invoke failure (common causes include function timeouts, runtime errors, memory exhaustion, VPC connectivity issues, permission errors, concurrency limits, and various configuration problems). For a complete list of possible invocation failures, see the section called “Invocation”. The third phase, labeled INVOKE WITH ERROR , illustrates this scenario. When this happens, the Lambda service performs a reset. The reset behaves like a Shutdown event. First, Lambda shuts down the runtime, then sends a Shutdown event to each registered external extension. The event includes the reason for the shutdown. If this environment is used for a new invocation, Lambda re-initializes the extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you"} -{"global_id": 679, "doc_id": "lambda", "chunk_id": "16", "question_id": 4, "question": "Does the Lambda reset clear the /tmp directory content prior to the next init phase?", "answer_span": "Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase.", "chunk": "set the function timeout as 360 seconds, the function and all extensions need to complete within 360 seconds. Note that there is no independent post-invoke phase. The duration is the sum of all invocation time (runtime + extensions) and is not calculated until the function and all extensions have finished executing. The invoke phase ends after the runtime and all extensions signal that they are done by sending a Next API request. Failures during the invoke phase If the Lambda function crashes or times out during the Invoke phase, Lambda resets the execution environment. The following diagram illustrates Lambda execution environment behavior when there's an invoke failure: Running code 15 AWS Lambda Developer Guide In the previous diagram: • The first phase is the INIT phase, which runs without errors. • The second phase is the INVOKE phase, which runs without errors. 
• At some point, suppose your function runs into an invoke failure (common causes include function timeouts, runtime errors, memory exhaustion, VPC connectivity issues, permission errors, concurrency limits, and various configuration problems). For a complete list of possible invocation failures, see the section called “Invocation”. The third phase, labeled INVOKE WITH ERROR , illustrates this scenario. When this happens, the Lambda service performs a reset. The reset behaves like a Shutdown event. First, Lambda shuts down the runtime, then sends a Shutdown event to each registered external extension. The event includes the reason for the shutdown. If this environment is used for a new invocation, Lambda re-initializes the extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you"} -{"global_id": 680, "doc_id": "lambda", "chunk_id": "17", "question_id": 1, "question": "What does the Lambda reset not clear before the next init phase?", "answer_span": "Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase.", "chunk": "extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account. If your function's system log configuration is set to plain text, this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure. The following examples show log outputs in both old and new formats. These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the newformat log messages and trace segments. Example CloudWatch Logs log output (runtime or extension crash) - old style START RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Version: $LATEST RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Error: Runtime exited without providing a reason Runtime.ExitError Running code 16 AWS Lambda Developer Guide END RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 REPORT RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB Example CloudWatch Logs log output (function timeout) - old style START RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Version: $LATEST 2024-03-04T17:22:38.033Z b70435cc-261c-4438-b9b6-efe4c8f04b21 Task timed out after 3.00 seconds END RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 REPORT RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Duration: 3004.92 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 33 MB Init Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. 
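The old- and new-style REPORT lines shown above differ only by the trailing Status and Error Type fields, so one pattern can read both. A small, illustrative parser follows; the regular expression is mine, not from the guide, and the sample strings are shortened copies of the examples above.

import re

REPORT_PATTERN = re.compile(
    r"REPORT RequestId: (?P<request_id>\S+)\s+"
    r"Duration: (?P<duration_ms>[\d.]+) ms.*?"
    r"(?:Status: (?P<status>\w+))?(?:\s+Error Type: (?P<error_type>\S+))?$"
)

def parse_report(line: str) -> dict:
    # Returns the named groups; Status and Error Type are None for old-style lines.
    match = REPORT_PATTERN.search(line)
    return match.groupdict() if match else {}

old = "REPORT RequestId: abc Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB"
new = "REPORT RequestId: def Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError"

print(parse_report(old)["status"])       # None (old style has no Status field)
print(parse_report(new)["error_type"])   # Runtime.ExitError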
Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version:"} -{"global_id": 681, "doc_id": "lambda", "chunk_id": "17", "question_id": 2, "question": "What may you see due to the changes being implemented in the Lambda service?", "answer_span": "Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account.", "chunk": "extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account. If your function's system log configuration is set to plain text, this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure. The following examples show log outputs in both old and new formats. These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the newformat log messages and trace segments. Example CloudWatch Logs log output (runtime or extension crash) - old style START RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Version: $LATEST RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Error: Runtime exited without providing a reason Runtime.ExitError Running code 16 AWS Lambda Developer Guide END RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 REPORT RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB Example CloudWatch Logs log output (function timeout) - old style START RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Version: $LATEST 2024-03-04T17:22:38.033Z b70435cc-261c-4438-b9b6-efe4c8f04b21 Task timed out after 3.00 seconds END RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 REPORT RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Duration: 3004.92 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 33 MB Init Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version:"} -{"global_id": 682, "doc_id": "lambda", "chunk_id": "17", "question_id": 3, "question": "What happens if your function's system log configuration is set to plain text?", "answer_span": "this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure.", "chunk": "extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account. 
If your function's system log configuration is set to plain text, this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure. The following examples show log outputs in both old and new formats. These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the newformat log messages and trace segments. Example CloudWatch Logs log output (runtime or extension crash) - old style START RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Version: $LATEST RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Error: Runtime exited without providing a reason Runtime.ExitError Running code 16 AWS Lambda Developer Guide END RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 REPORT RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB Example CloudWatch Logs log output (function timeout) - old style START RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Version: $LATEST 2024-03-04T17:22:38.033Z b70435cc-261c-4438-b9b6-efe4c8f04b21 Task timed out after 3.00 seconds END RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 REPORT RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Duration: 3004.92 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 33 MB Init Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version:"} -{"global_id": 683, "doc_id": "lambda", "chunk_id": "17", "question_id": 4, "question": "What will the new format for CloudWatch logs include in the REPORT line?", "answer_span": "The new format for CloudWatch logs includes an additional statusfield in the REPORT line.", "chunk": "extension and runtime together with the next invocation. Note that the Lambda reset does not clear the /tmp directory content prior to the next init phase. This behavior is consistent with the regular shutdown phase. Note AWS is currently implementing changes to the Lambda service. Due to these changes, you may see minor differences between the structure and content of system log messages and trace segments emitted by different Lambda functions in your AWS account. If your function's system log configuration is set to plain text, this change affects the log messages captured in CloudWatch Logs when your function experiences an invoke failure. The following examples show log outputs in both old and new formats. These changes will be implemented during the coming weeks, and all functions in all AWS Regions except the China and GovCloud regions will transition to use the newformat log messages and trace segments. 
Example CloudWatch Logs log output (runtime or extension crash) - old style START RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Version: $LATEST RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Error: Runtime exited without providing a reason Runtime.ExitError Running code 16 AWS Lambda Developer Guide END RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 REPORT RequestId: c3252230-c73d-49f6-8844-968c01d1e2e1 Duration: 933.59 ms Billed Duration: 934 ms Memory Size: 128 MB Max Memory Used: 9 MB Example CloudWatch Logs log output (function timeout) - old style START RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Version: $LATEST 2024-03-04T17:22:38.033Z b70435cc-261c-4438-b9b6-efe4c8f04b21 Task timed out after 3.00 seconds END RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 REPORT RequestId: b70435cc-261c-4438-b9b6-efe4c8f04b21 Duration: 3004.92 ms Billed Duration: 3000 ms Memory Size: 128 MB Max Memory Used: 33 MB Init Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version:"} -{"global_id": 684, "doc_id": "lambda", "chunk_id": "18", "question_id": 1, "question": "What is included in the new format for CloudWatch logs in the REPORT line?", "answer_span": "The new format for CloudWatch logs includes an additional status field in the REPORT line.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs. Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: Running code 17 AWS Lambda Developer Guide 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds. 
This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} -{"global_id": 685, "doc_id": "lambda", "chunk_id": "18", "question_id": 2, "question": "What does the REPORT line include in the case of a runtime or extension crash?", "answer_span": "In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs. Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: Running code 17 AWS Lambda Developer Guide 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds. This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} -{"global_id": 686, "doc_id": "lambda", "chunk_id": "18", "question_id": 3, "question": "What happens during the fourth phase following an invoke failure?", "answer_span": "The fourth phase represents the INVOKE phase immediately following an invoke failure.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. 
Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs. Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: Running code 17 AWS Lambda Developer Guide 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds. This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} -{"global_id": 687, "doc_id": "lambda", "chunk_id": "18", "question_id": 4, "question": "What is a suppressed init in the context of Lambda?", "answer_span": "When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs.", "chunk": "Duration: 111.23 ms The new format for CloudWatch logs includes an additional statusfield in the REPORT line. In the case of a runtime or extension crash, the REPORT line also includes a field ErrorType. Example CloudWatch Logs log output (runtime or extension crash) - new style START RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Version: $LATEST END RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd REPORT RequestId: 5b866fb1-7154-4af6-8078-6ef6ca4c2ddd Duration: 133.61 ms Billed Duration: 133 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 80.00 ms Status: error Error Type: Runtime.ExitError Example CloudWatch Logs log output (function timeout) - new style START RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Version: $LATEST END RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda REPORT RequestId: 527cb862-4f5e-49a9-9ae4-a7edc90f0fda Duration: 3016.78 ms Billed Duration: 3016 ms Memory Size: 128 MB Max Memory Used: 31 MB Init Duration: 84.00 ms Status: timeout • The fourth phase represents the INVOKE phase immediately following an invoke failure. Here, Lambda initializes the environment again by re-running the INIT phase. This is called a suppressed init. When suppressed inits occur, Lambda doesn't explicitly report an additional INIT phase in CloudWatch Logs. 
Instead, you may notice that the duration in the REPORT line includes an additional INIT duration + the INVOKE duration. For example, suppose you see the following logs in CloudWatch: Running code 17 AWS Lambda Developer Guide 2022-12-20T01:00:00.000-08:00 START RequestId: XXX Version: $LATEST 2022-12-20T01:00:02.500-08:00 END RequestId: XXX 2022-12-20T01:00:02.500-08:00 REPORT RequestId: XXX Duration: 3022.91 ms Billed Duration: 3000 ms Memory Size: 512 MB Max Memory Used: 157 MB In this example, the difference between the REPORT and START timestamps is 2.5 seconds. This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time"} -{"global_id": 688, "doc_id": "lambda", "chunk_id": "19", "question_id": 1, "question": "What is the reported duration mentioned in the text?", "answer_span": "This doesn't match the reported duration of 3022.91 millseconds", "chunk": "seconds. This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation. However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance —even for functions that are invoked continuously. You should not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} -{"global_id": 689, "doc_id": "lambda", "chunk_id": "19", "question_id": 2, "question": "What does the Telemetry API emit during the invoke phases?", "answer_span": "The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke", "chunk": "seconds. This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. 
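The inference described here can be written out as a small calculation: subtracting the wall-clock time between the START and REPORT timestamps from the reported Duration gives a rough estimate of the suppressed INIT time. The figures below are the ones from the example logs; the helper function is purely illustrative and not part of any Lambda API.

from datetime import datetime

def estimate_suppressed_init_ms(start_ts, report_ts, reported_duration_ms):
    """Roughly split a REPORT duration into suppressed INIT time and INVOKE time."""
    invoke_ms = (report_ts - start_ts).total_seconds() * 1000  # wall clock between START and REPORT
    return reported_duration_ms - invoke_ms, invoke_ms

start = datetime.fromisoformat("2022-12-20T01:00:00.000-08:00")
report = datetime.fromisoformat("2022-12-20T01:00:02.500-08:00")
init_ms, invoke_ms = estimate_suppressed_init_ms(start, report, 3022.91)
print(f"invoke ~ {invoke_ms:.0f} ms, suppressed init ~ {init_ms:.0f} ms")
# invoke ~ 2500 ms, suppressed init ~ 523 ms
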
In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation. However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance —even for functions that are invoked continuously. You should not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} -{"global_id": 690, "doc_id": "lambda", "chunk_id": "19", "question_id": 3, "question": "What happens if the runtime or an extension does not respond to the Shutdown event within the limit?", "answer_span": "Lambda ends the process using a SIGKILL signal.", "chunk": "seconds. This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation. 
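For illustration, a stripped-down external extension loop that registers for SHUTDOWN events and performs its cleanup when one arrives might look like the following. This is a sketch that assumes the standard Lambda Extensions API register/next endpoints; error handling and INVOKE processing are omitted, and flush_buffers is a placeholder for your own cleanup work, which must finish within the shutdown limit described above or the process is ended with SIGKILL.

import json
import os
import urllib.request

RUNTIME_API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{RUNTIME_API}/2020-01-01/extension"

def register(name):
    # Register the extension for INVOKE and SHUTDOWN events.
    req = urllib.request.Request(
        f"{BASE}/register",
        data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
        headers={"Lambda-Extension-Name": name, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Lambda-Extension-Identifier"]

def flush_buffers():
    pass  # illustrative stand-in for flushing logs, closing connections, etc.

def event_loop(ext_id):
    while True:
        # Blocks until the next event; the Shutdown event arrives as the response to this call.
        req = urllib.request.Request(
            f"{BASE}/event/next", headers={"Lambda-Extension-Identifier": ext_id}
        )
        with urllib.request.urlopen(req) as resp:
            event = json.load(resp)
        if event.get("eventType") == "SHUTDOWN":
            flush_buffers()  # final cleanup before the environment goes away
            break

if __name__ == "__main__":
    event_loop(register("example-extension"))
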
However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance —even for functions that are invoked continuously. You should not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} -{"global_id": 691, "doc_id": "lambda", "chunk_id": "19", "question_id": 4, "question": "What should you not assume about the execution environment?", "answer_span": "You should not assume that the execution environment will persist indefinitely.", "chunk": "seconds. This doesn't match the reported duration of 3022.91 millseconds, because it doesn't take into account the extra INIT (suppressed init) that Lambda performed. In this example, you can infer that the actual INVOKE phase took 2.5 seconds. For more insight into this behavior, you can use the Accessing real-time telemetry data for extensions using the Telemetry API. The Telemetry API emits INIT_START, INIT_RUNTIME_DONE, and INIT_REPORT events with phase=invoke whenever suppressed inits occur in between invoke phases. • The fifth phase represents the SHUTDOWN phase, which runs without errors. Shutdown phase When Lambda is about to shut down the runtime, it sends a Shutdown event to each registered external extension. Extensions can use this time for final cleanup tasks. The Shutdown event is a response to a Next API request. Duration limit: The maximum duration of the Shutdown phase depends on the configuration of registered extensions: • 0 ms – A function with no registered extensions • 500 ms – A function with a registered internal extension • 2,000 ms – A function with one or more registered external extensions If the runtime or an extension does not respond to the Shutdown event within the limit, Lambda ends the process using a SIGKILL signal. After the function and all extensions have completed, Lambda maintains the execution environment for some time in anticipation of another function invocation. However, Lambda terminates execution environments every few hours to allow for runtime updates and maintenance —even for functions that are invoked continuously. You should not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the"} -{"global_id": 692, "doc_id": "lambda", "chunk_id": "20", "question_id": 1, "question": "What should you not assume about the execution environment in AWS Lambda?", "answer_span": "not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely.", "chunk": "not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again. 
For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one. • Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. The directory content remains when the execution environment is frozen, providing a transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored. For more information on deployment size limits, see Lambda quotas. • Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits. Cold starts and latency When Lambda receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time,"} -{"global_id": 693, "doc_id": "lambda", "chunk_id": "20", "question_id": 2, "question": "What happens when the function is invoked again in AWS Lambda?", "answer_span": "When the function is invoked again, Lambda thaws the environment for reuse.", "chunk": "not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again. For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one. • Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. The directory content remains when the execution environment is frozen, providing a transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored. For more information on deployment size limits, see Lambda quotas. • Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits. Cold starts and latency When Lambda receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. 
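As a sketch of the /tmp cache check mentioned above: the code below looks for a previously stored file before fetching it again, so a warm environment can skip the download. The bucket name, object key, and local path are placeholder assumptions chosen for the example.

import os
import boto3

s3 = boto3.client("s3")
CACHE_PATH = "/tmp/model.bin"                # /tmp content survives while the environment is frozen
BUCKET, KEY = "example-bucket", "model.bin"  # placeholder names

def load_model_bytes():
    # Reuse the cached copy if a previous invocation in this environment already stored it.
    if not os.path.exists(CACHE_PATH):
        s3.download_file(BUCKET, KEY, CACHE_PATH)
    with open(CACHE_PATH, "rb") as f:
        return f.read()
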
In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time,"} -{"global_id": 694, "doc_id": "lambda", "chunk_id": "20", "question_id": 3, "question": "What is the range of disk space provided by each execution environment in AWS Lambda?", "answer_span": "Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory.", "chunk": "not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again. For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one. • Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. The directory content remains when the execution environment is frozen, providing a transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored. For more information on deployment size limits, see Lambda quotas. • Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits. Cold starts and latency When Lambda receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time,"} -{"global_id": 695, "doc_id": "lambda", "chunk_id": "20", "question_id": 4, "question": "What is referred to as a 'cold start' in AWS Lambda?", "answer_span": "During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler.", "chunk": "not assume that the execution Running code 18 AWS Lambda Developer Guide environment will persist indefinitely. For more information, see Implement statelessness in functions. When the function is invoked again, Lambda thaws the environment for reuse. Reusing the execution environment has the following implications: • Objects declared outside of the function's handler method remain initialized, providing additional optimization when the function is invoked again. For example, if your Lambda function establishes a database connection, instead of reestablishing the connection, the original connection is used in subsequent invocations. We recommend adding logic in your code to check if a connection exists before creating a new one. 
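A minimal sketch of that recommendation: keep the connection in a module-level variable and create it only when it does not already exist, so warm invocations reuse it. create_db_connection is a placeholder for whatever database client or driver you actually use.

connection = None  # declared outside the handler, so it survives warm invocations

def create_db_connection():
    ...  # placeholder: open a connection with your database driver of choice

def lambda_handler(event, context):
    global connection
    if connection is None:  # only reconnect when the environment is new
        connection = create_db_connection()
    # use `connection` for the rest of the invocation
    return {"statusCode": 200}
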
• Each execution environment provides between 512 MB and 10,240 MB, in 1-MB increments, of disk space in the /tmp directory. The directory content remains when the execution environment is frozen, providing a transient cache that can be used for multiple invocations. You can add extra code to check if the cache has the data that you stored. For more information on deployment size limits, see Lambda quotas. • Background processes or callbacks that were initiated by your Lambda function and did not complete when the function ended resume if Lambda reuses the execution environment. Make sure that any background processes or callbacks in your code are complete before the code exits. Cold starts and latency When Lambda receives a request to run a function via the Lambda API, the service first prepares an execution environment. During this initialization phase, the service downloads your code, starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time,"} -{"global_id": 696, "doc_id": "lambda", "chunk_id": "21", "question_id": 1, "question": "What are the first two steps referred to as in the Lambda execution process?", "answer_span": "the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. 
Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating"} -{"global_id": 697, "doc_id": "lambda", "chunk_id": "21", "question_id": 2, "question": "What happens to the execution environment after the invocation completes?", "answer_span": "the execution environment is frozen", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating"} -{"global_id": 698, "doc_id": "lambda", "chunk_id": "21", "question_id": 3, "question": "What is the recommended solution to reduce cold starts for predictable function start times?", "answer_span": "provisioned concurrency is the recommended solution to ensure the lowest possible latency", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. 
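One way to observe the cold/warm split in practice is an illustrative pattern like the following (not something the service requires): a module-level flag is set during initialization and flipped on the first invocation, so each invocation can log whether it ran in a freshly initialized environment.

is_cold_start = True  # module scope: runs once per execution environment

def lambda_handler(event, context):
    global is_cold_start
    print(f"cold_start={is_cold_start}")  # True only for the first invoke in this environment
    is_cold_start = False
    return {"ok": True}
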
The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating"} -{"global_id": 699, "doc_id": "lambda", "chunk_id": "21", "question_id": 4, "question": "What does static initialization involve in the context of Lambda functions?", "answer_span": "Static initialization happens before the handler code starts running in a function", "chunk": "starts the environment, and runs any initialization code outside of the main handler. Finally, Lambda runs the handler code. In this diagram, the first two steps of downloading the code and setting up the environment are frequently referred to as a “cold start”. You are not charged for this time, but it does add latency to your overall invocation duration. Running code 19 AWS Lambda Developer Guide After the invocation completes, the execution environment is frozen. To improve resource management and performance, Lambda retains the execution environment for a period of time. During this time, if another request arrives for the same function, Lambda can reuse the environment. This second request typically finishes more quickly, since the execution environment is already fully set up. This is called a “warm start”. Cold starts typically occur in under 1% of invocations. The duration of a cold start varies from under 100 ms to over 1 second. In general, cold starts are typically more common in development and test functions than production workloads. This is because development and test functions are usually invoked less frequently. Reducing cold starts with Provisioned Concurrency If you need predictable function start times for your workload, provisioned concurrency is the recommended solution to ensure the lowest possible latency. This feature pre-initializes execution environments, reducing cold starts. For example, a function with a provisioned concurrency of 6 has 6 execution environments prewarmed. Optimizing static initialization Static initialization happens before the handler code starts running in a function. This is the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. 
Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating"} -{"global_id": 700, "doc_id": "lambda", "chunk_id": "22", "question_id": 1, "question": "What is the purpose of the initialization code in AWS Lambda?", "answer_span": "This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services.", "chunk": "the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment. Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers. • The amount of code and initialization work. • The performance of libraries and other services in setting up connections and other resources. There are a number of steps that developers can take to optimize static initialization latency. If a function has many objects and connections, you may be able to rearchitect a single function into multiple, specialized functions. These are individually smaller and each have less initialization code. It’s important that functions only import the libraries and dependencies that they need. For example, if you only use Amazon DynamoDB in the AWS SDK, you can require an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} -{"global_id": 701, "doc_id": "lambda", "chunk_id": "22", "question_id": 2, "question": "When does the initialization code run in AWS Lambda?", "answer_span": "The initialization code runs when a new execution environment is created for the first time.", "chunk": "the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment. 
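Provisioned concurrency, recommended above for predictable start times, can also be configured programmatically. A sketch with the AWS SDK for Python is shown below; the function name and alias are placeholders, and the setting must target a published version or alias rather than $LATEST.

import boto3

lambda_client = boto3.client("lambda")

# Pre-warm 6 execution environments for the "live" alias of an example function.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="example-function",   # placeholder
    Qualifier="live",                  # a published version or alias
    ProvisionedConcurrentExecutions=6,
)
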
Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers. • The amount of code and initialization work. • The performance of libraries and other services in setting up connections and other resources. There are a number of steps that developers can take to optimize static initialization latency. If a function has many objects and connections, you may be able to rearchitect a single function into multiple, specialized functions. These are individually smaller and each have less initialization code. It’s important that functions only import the libraries and dependencies that they need. For example, if you only use Amazon DynamoDB in the AWS SDK, you can require an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} -{"global_id": 702, "doc_id": "lambda", "chunk_id": "22", "question_id": 3, "question": "What factors affect the latency of initialization code?", "answer_span": "Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers.", "chunk": "the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment. Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers. • The amount of code and initialization work. • The performance of libraries and other services in setting up connections and other resources. There are a number of steps that developers can take to optimize static initialization latency. If a function has many objects and connections, you may be able to rearchitect a single function into multiple, specialized functions. These are individually smaller and each have less initialization code. It’s important that functions only import the libraries and dependencies that they need. For example, if you only use Amazon DynamoDB in the AWS SDK, you can require an individual service instead of the entire SDK. 
Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} -{"global_id": 703, "doc_id": "lambda", "chunk_id": "22", "question_id": 4, "question": "What can developers do to optimize static initialization latency?", "answer_span": "There are a number of steps that developers can take to optimize static initialization latency.", "chunk": "the initialization code that you provide, that is outside of the main handler. This code is often used to import libraries and dependencies, set up configurations, and initialize connections to other services. Running code 20 AWS Lambda Developer Guide The following Python example shows importing, and configuring modules, and creating the Amazon S3 client during the initialization phase, before the lambda_handler function runs during invoke. import os import json import cv2 import logging import boto3 s3 = boto3.client('s3') logger = logging.getLogger() logger.setLevel(logging.INFO) def lambda_handler(event, context): # Handler logic... The largest contributor of latency before function execution comes from initialization code. This code runs when a new execution environment is created for the first time. The initialization code is not run again if an invocation uses a warm execution environment. Factors that affect initialization code latency include: • The size of the function package, in terms of imported libraries and dependencies, and Lambda layers. • The amount of code and initialization work. • The performance of libraries and other services in setting up connections and other resources. There are a number of steps that developers can take to optimize static initialization latency. If a function has many objects and connections, you may be able to rearchitect a single function into multiple, specialized functions. These are individually smaller and each have less initialization code. It’s important that functions only import the libraries and dependencies that they need. For example, if you only use Amazon DynamoDB in the AWS SDK, you can require an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} -{"global_id": 704, "doc_id": "lambda", "chunk_id": "23", "question_id": 1, "question": "What is the alternative to using the entire SDK?", "answer_span": "an individual service instead of the entire SDK.", "chunk": "an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} -{"global_id": 705, "doc_id": "lambda", "chunk_id": "23", "question_id": 2, "question": "What is the correct way to require DynamoDB?", "answer_span": "const DynamoDB = require('aws-sdk/clients/dynamodb')", "chunk": "an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} -{"global_id": 706, "doc_id": "lambda", "chunk_id": "23", "question_id": 3, "question": "What should be used instead of 'const AWS = require('aws-sdk')'?", "answer_span": "use: const DynamoDB = require('aws-sdk/clients/dynamodb')", "chunk": "an individual service instead of the entire SDK. 
Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} -{"global_id": 707, "doc_id": "lambda", "chunk_id": "23", "question_id": 4, "question": "What is the context of the examples provided?", "answer_span": "Compare the following three examples:", "chunk": "an individual service instead of the entire SDK. Compare the following three examples: // Instead of const AWS = require('aws-sdk'), use: const DynamoDB = require('aws-sdk/clients/dynamodb') Running code 21"} +{"global_id": 0, "doc_id": "iam", "chunk_id": "0", "question_id": 1, "question": "What is IAM?", "answer_span": "AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources.", "chunk": "AWS Identity and Access Management User Guide What is IAM? AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. IAM provides the infrastructure necessary to control authentication and authorization for your AWS accounts. Identities When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide. Use IAM to set up other identities in addition to your root user, such as administrators, analysts, and developers, and grant them access to the resources they need to succeed in their tasks. Access management After a user is set up in IAM, they use their sign-in credentials to authenticate with AWS. Authentication is provided by matching the sign-in credentials to a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. For"} +{"global_id": 1, "doc_id": "iam", "chunk_id": "0", "question_id": 2, "question": "What is the AWS account root user?", "answer_span": "This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account.", "chunk": "AWS Identity and Access Management User Guide What is IAM? AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. IAM provides the infrastructure necessary to control authentication and authorization for your AWS accounts. Identities When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. 
This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide. Use IAM to set up other identities in addition to your root user, such as administrators, analysts, and developers, and grant them access to the resources they need to succeed in their tasks. Access management After a user is set up in IAM, they use their sign-in credentials to authenticate with AWS. Authentication is provided by matching the sign-in credentials to a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. For"} +{"global_id": 2, "doc_id": "iam", "chunk_id": "0", "question_id": 3, "question": "What should you do with your root user credentials?", "answer_span": "We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform.", "chunk": "AWS Identity and Access Management User Guide What is IAM? AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. IAM provides the infrastructure necessary to control authentication and authorization for your AWS accounts. Identities When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide. Use IAM to set up other identities in addition to your root user, such as administrators, analysts, and developers, and grant them access to the resources they need to succeed in their tasks. Access management After a user is set up in IAM, they use their sign-in credentials to authenticate with AWS. Authentication is provided by matching the sign-in credentials to a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. 
For"} +{"global_id": 3, "doc_id": "iam", "chunk_id": "0", "question_id": 4, "question": "How do users authenticate with AWS after being set up in IAM?", "answer_span": "After a user is set up in IAM, they use their sign-in credentials to authenticate with AWS.", "chunk": "AWS Identity and Access Management User Guide What is IAM? AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. With IAM, you can manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. IAM provides the infrastructure necessary to control authentication and authorization for your AWS accounts. Identities When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account. We strongly recommend that you don't use the root user for your everyday tasks. Safeguard your root user credentials and use them to perform the tasks that only the root user can perform. For the complete list of tasks that require you to sign in as the root user, see Tasks that require root user credentials in the IAM User Guide. Use IAM to set up other identities in addition to your root user, such as administrators, analysts, and developers, and grant them access to the resources they need to succeed in their tasks. Access management After a user is set up in IAM, they use their sign-in credentials to authenticate with AWS. Authentication is provided by matching the sign-in credentials to a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. For"} +{"global_id": 4, "doc_id": "iam", "chunk_id": "1", "question_id": 1, "question": "What is a principal in the context of AWS?", "answer_span": "a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account.", "chunk": "a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, the request for authorization is sent to that service and it looks to see if your identity is on the list of authorized users, what policies are being enforced to control the level of access granted, and any other policies that might be in effect. Authorization requests can be made by principals within your AWS account or from another AWS account that you trust. Once authorized, the principal can take action or perform operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. 
1 AWS Identity and Access Management User Guide Tip AWS Training and Certification provides a 10-minute video introduction to IAM: Introduction to AWS Identity and Access Management. Service availability IAM, like many other AWS services, is eventually consistent. IAM achieves high availability by replicating data across multiple servers within Amazon's data centers around the world. If a request to change some data is successful, the change is committed and safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in"} +{"global_id": 5, "doc_id": "iam", "chunk_id": "1", "question_id": 2, "question": "What happens when a request is made to grant the principal access to resources?", "answer_span": "Access is granted in response to an authorization request if the user has been given permission to the resource.", "chunk": "a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, the request for authorization is sent to that service and it looks to see if your identity is on the list of authorized users, what policies are being enforced to control the level of access granted, and any other policies that might be in effect. Authorization requests can be made by principals within your AWS account or from another AWS account that you trust. Once authorized, the principal can take action or perform operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. 1 AWS Identity and Access Management User Guide Tip AWS Training and Certification provides a 10-minute video introduction to IAM: Introduction to AWS Identity and Access Management. Service availability IAM, like many other AWS services, is eventually consistent. IAM achieves high availability by replicating data across multiple servers within Amazon's data centers around the world. If a request to change some data is successful, the change is committed and safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in"} +{"global_id": 6, "doc_id": "iam", "chunk_id": "1", "question_id": 3, "question": "What can the principal do once authorized?", "answer_span": "the principal can take action or perform operations on resources in your AWS account.", "chunk": "a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. 
For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, the request for authorization is sent to that service and it looks to see if your identity is on the list of authorized users, what policies are being enforced to control the level of access granted, and any other policies that might be in effect. Authorization requests can be made by principals within your AWS account or from another AWS account that you trust. Once authorized, the principal can take action or perform operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. 1 AWS Identity and Access Management User Guide Tip AWS Training and Certification provides a 10-minute video introduction to IAM: Introduction to AWS Identity and Access Management. Service availability IAM, like many other AWS services, is eventually consistent. IAM achieves high availability by replicating data across multiple servers within Amazon's data centers around the world. If a request to change some data is successful, the change is committed and safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in"} +{"global_id": 7, "doc_id": "iam", "chunk_id": "1", "question_id": 4, "question": "How does IAM achieve high availability?", "answer_span": "IAM achieves high availability by replicating data across multiple servers within Amazon's data centers around the world.", "chunk": "a principal (an IAM user, AWS STS federated principal, IAM role, or application) trusted by the AWS account. Next, a request is made to grant the principal access to resources. Access is granted in response to an authorization request if the user has been given permission to the resource. For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, the request for authorization is sent to that service and it looks to see if your identity is on the list of authorized users, what policies are being enforced to control the level of access granted, and any other policies that might be in effect. Authorization requests can be made by principals within your AWS account or from another AWS account that you trust. Once authorized, the principal can take action or perform operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. 1 AWS Identity and Access Management User Guide Tip AWS Training and Certification provides a 10-minute video introduction to IAM: Introduction to AWS Identity and Access Management. Service availability IAM, like many other AWS services, is eventually consistent. IAM achieves high availability by replicating data across multiple servers within Amazon's data centers around the world. If a request to change some data is successful, the change is committed and safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. 
We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in"} +{"global_id": 8, "doc_id": "iam", "chunk_id": "2", "question_id": 1, "question": "What must be replicated across IAM?", "answer_span": "the change must be replicated across IAM", "chunk": "safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them. For more information, see Changes that I make are not always immediately visible. Service cost information AWS Identity and Access Management (IAM), AWS IAM Identity Center and AWS Security Token Service (AWS STS) are features of your AWS account offered at no additional charge. You are charged only when you access other AWS services using your IAM users or AWS STS temporary security credentials. IAM Access Analyzer external access analysis is offered at no additional charge. However, you will incur charges for unused access analysis and customer policy checks. For a complete list of charges and prices for IAM Access Analyzer, see IAM Access Analyzer pricing. For information about the pricing of other AWS products, see the Amazon Web Services pricing page. Integration with other AWS services IAM is integrated with many AWS services. For a list of AWS services that work with IAM and the IAM features the services support, see AWS services that work with IAM. 2 AWS Identity and Access Management User Guide Why should I use IAM? AWS Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions"} +{"global_id": 9, "doc_id": "iam", "chunk_id": "2", "question_id": 2, "question": "What should you not include in the critical, high-availability code paths of your application?", "answer_span": "do not include such IAM changes in the critical, high-availability code paths of your application", "chunk": "safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them. For more information, see Changes that I make are not always immediately visible. Service cost information AWS Identity and Access Management (IAM), AWS IAM Identity Center and AWS Security Token Service (AWS STS) are features of your AWS account offered at no additional charge. You are charged only when you access other AWS services using your IAM users or AWS STS temporary security credentials. IAM Access Analyzer external access analysis is offered at no additional charge. 
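As a hedged illustration of the propagation guidance above, the following boto3 sketch makes an IAM change in a setup routine and then waits until the change is visible before anything depends on it. The user name is a placeholder, not a value from the guide.

```python
# Minimal sketch of the eventual-consistency guidance (assumes boto3;
# the user name "deploy-bot" is a placeholder).
import boto3

iam = boto3.client("iam")
iam.create_user(UserName="deploy-bot")

# IAM is eventually consistent: poll until the new user is visible
# before any production workflow relies on it.
iam.get_waiter("user_exists").wait(UserName="deploy-bot")
```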
However, you will incur charges for unused access analysis and customer policy checks. For a complete list of charges and prices for IAM Access Analyzer, see IAM Access Analyzer pricing. For information about the pricing of other AWS products, see the Amazon Web Services pricing page. Integration with other AWS services IAM is integrated with many AWS services. For a list of AWS services that work with IAM and the IAM features the services support, see AWS services that work with IAM. 2 AWS Identity and Access Management User Guide Why should I use IAM? AWS Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions"} +{"global_id": 10, "doc_id": "iam", "chunk_id": "2", "question_id": 3, "question": "What is a primary benefit of using IAM?", "answer_span": "the ability to grant shared access to your AWS account", "chunk": "safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them. For more information, see Changes that I make are not always immediately visible. Service cost information AWS Identity and Access Management (IAM), AWS IAM Identity Center and AWS Security Token Service (AWS STS) are features of your AWS account offered at no additional charge. You are charged only when you access other AWS services using your IAM users or AWS STS temporary security credentials. IAM Access Analyzer external access analysis is offered at no additional charge. However, you will incur charges for unused access analysis and customer policy checks. For a complete list of charges and prices for IAM Access Analyzer, see IAM Access Analyzer pricing. For information about the pricing of other AWS products, see the Amazon Web Services pricing page. Integration with other AWS services IAM is integrated with many AWS services. For a list of AWS services that work with IAM and the IAM features the services support, see AWS services that work with IAM. 2 AWS Identity and Access Management User Guide Why should I use IAM? AWS Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions"} +{"global_id": 11, "doc_id": "iam", "chunk_id": "2", "question_id": 4, "question": "What is offered at no additional charge in AWS IAM?", "answer_span": "AWS Identity and Access Management (IAM), AWS IAM Identity Center and AWS Security Token Service (AWS STS) are features of your AWS account offered at no additional charge", "chunk": "safely stored. However, the change must be replicated across IAM, which can take some time. Such changes include creating or updating users, groups, roles, or policies. 
We recommend that you do not include such IAM changes in the critical, high-availability code paths of your application. Instead, make IAM changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them. For more information, see Changes that I make are not always immediately visible. Service cost information AWS Identity and Access Management (IAM), AWS IAM Identity Center and AWS Security Token Service (AWS STS) are features of your AWS account offered at no additional charge. You are charged only when you access other AWS services using your IAM users or AWS STS temporary security credentials. IAM Access Analyzer external access analysis is offered at no additional charge. However, you will incur charges for unused access analysis and customer policy checks. For a complete list of charges and prices for IAM Access Analyzer, see IAM Access Analyzer pricing. For information about the pricing of other AWS products, see the Amazon Web Services pricing page. Integration with other AWS services IAM is integrated with many AWS services. For a list of AWS services that work with IAM and the IAM features the services support, see AWS services that work with IAM. 2 AWS Identity and Access Management User Guide Why should I use IAM? AWS Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions"} +{"global_id": 12, "doc_id": "iam", "chunk_id": "3", "question_id": 1, "question": "What is one of the primary benefits of using IAM?", "answer_span": "One of the primary benefits of using IAM is the ability to grant shared access to your AWS account.", "chunk": "Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions different users can perform on specific resources. This level of access control is crucial for maintaining the security of your AWS environment. IAM also provides several other security features. You can add multi-factor authentication (MFA) for an extra layer of protection, and leverage identity federation to seamlessly integrate users from your corporate network or other identity providers. IAM also integrates with AWS CloudTrail, providing detailed logging and identity information to support auditing and compliance requirements. By taking advantage of these capabilities, you can help ensure that access to your critical AWS resources is tightly controlled and secure. Shared access to your AWS account You can grant other people permission to administer and use resources in your AWS account without having to share your password or access key. Granular permissions You can grant different permissions to different people for different resources. For example, you might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other AWS services. 
For other users, you can allow read-only access to just some Amazon S3 buckets, or permission to administer just some Amazon EC2 instances, or to access your billing information but nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor"} +{"global_id": 13, "doc_id": "iam", "chunk_id": "3", "question_id": 2, "question": "What does IAM allow you to assign?", "answer_span": "Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions different users can perform on specific resources.", "chunk": "Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions different users can perform on specific resources. This level of access control is crucial for maintaining the security of your AWS environment. IAM also provides several other security features. You can add multi-factor authentication (MFA) for an extra layer of protection, and leverage identity federation to seamlessly integrate users from your corporate network or other identity providers. IAM also integrates with AWS CloudTrail, providing detailed logging and identity information to support auditing and compliance requirements. By taking advantage of these capabilities, you can help ensure that access to your critical AWS resources is tightly controlled and secure. Shared access to your AWS account You can grant other people permission to administer and use resources in your AWS account without having to share your password or access key. Granular permissions You can grant different permissions to different people for different resources. For example, you might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other AWS services. For other users, you can allow read-only access to just some Amazon S3 buckets, or permission to administer just some Amazon EC2 instances, or to access your billing information but nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor"} +{"global_id": 14, "doc_id": "iam", "chunk_id": "3", "question_id": 3, "question": "What can you add for an extra layer of protection?", "answer_span": "You can add multi-factor authentication (MFA) for an extra layer of protection.", "chunk": "Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions different users can perform on specific resources. This level of access control is crucial for maintaining the security of your AWS environment. 
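The granular-permissions idea in these records can be expressed as an identity-based policy. The sketch below is an illustrative assumption using boto3: it grants one user read-only access to a single S3 bucket and nothing else; the user name, policy name, and bucket ARN are placeholders.

```python
# Hedged sketch (assumes boto3); all names and ARNs are placeholders.
import json
import boto3

iam = boto3.client("iam")

read_only_s3 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

# Attach an inline identity-based policy granting read-only access
# to just one bucket, and nothing else.
iam.put_user_policy(
    UserName="analyst-1",
    PolicyName="ReadOnlyReportsBucket",
    PolicyDocument=json.dumps(read_only_s3),
)
```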
IAM also provides several other security features. You can add multi-factor authentication (MFA) for an extra layer of protection, and leverage identity federation to seamlessly integrate users from your corporate network or other identity providers. IAM also integrates with AWS CloudTrail, providing detailed logging and identity information to support auditing and compliance requirements. By taking advantage of these capabilities, you can help ensure that access to your critical AWS resources is tightly controlled and secure. Shared access to your AWS account You can grant other people permission to administer and use resources in your AWS account without having to share your password or access key. Granular permissions You can grant different permissions to different people for different resources. For example, you might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other AWS services. For other users, you can allow read-only access to just some Amazon S3 buckets, or permission to administer just some Amazon EC2 instances, or to access your billing information but nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor"} +{"global_id": 15, "doc_id": "iam", "chunk_id": "3", "question_id": 4, "question": "How can you securely provide credentials for applications that run on EC2 instances?", "answer_span": "You can use IAM features to securely provide credentials for applications that run on EC2 instances.", "chunk": "Identity and Access Management is a powerful tool for securely managing access to your AWS resources. One of the primary benefits of using IAM is the ability to grant shared access to your AWS account. Additionally, IAM allows you to assign granular permissions, enabling you to control exactly what actions different users can perform on specific resources. This level of access control is crucial for maintaining the security of your AWS environment. IAM also provides several other security features. You can add multi-factor authentication (MFA) for an extra layer of protection, and leverage identity federation to seamlessly integrate users from your corporate network or other identity providers. IAM also integrates with AWS CloudTrail, providing detailed logging and identity information to support auditing and compliance requirements. By taking advantage of these capabilities, you can help ensure that access to your critical AWS resources is tightly controlled and secure. Shared access to your AWS account You can grant other people permission to administer and use resources in your AWS account without having to share your password or access key. Granular permissions You can grant different permissions to different people for different resources. For example, you might allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other AWS services. For other users, you can allow read-only access to just some Amazon S3 buckets, or permission to administer just some Amazon EC2 instances, or to access your billing information but nothing else. 
Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor"} +{"global_id": 16, "doc_id": "iam", "chunk_id": "4", "question_id": 1, "question": "What can you use IAM features for?", "answer_span": "You can use IAM features to securely provide credentials for applications that run on EC2 instances.", "chunk": "nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor authentication (MFA) You can add two-factor authentication to your account and to individual users for extra security. With MFA you or your users must provide not only a password or access key to work with your Why should I use IAM? 3 AWS Identity and Access Management User Guide account, but also a code from a specially configured device. If you already use a FIDO security key with other services, and it has an AWS supported configuration, you can use WebAuthn for MFA security. For more information, see Supported configurations for using passkeys and security keys Identity federation You can allow users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to access your AWS account. These users are granted temporary credentials that comply with IAM best practice recommendations. Using identity federation enhances the security of your AWS account. Identity information for assurance If you use AWS CloudTrail, you receive log records that include information about those who made requests for resources in your account. That information is based on IAM identities. PCI DSS Compliance IAM supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? AWS Identity and Access Management is a core"} +{"global_id": 17, "doc_id": "iam", "chunk_id": "4", "question_id": 2, "question": "What does multi-factor authentication (MFA) require?", "answer_span": "With MFA you or your users must provide not only a password or access key to work with your account, but also a code from a specially configured device.", "chunk": "nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor authentication (MFA) You can add two-factor authentication to your account and to individual users for extra security. With MFA you or your users must provide not only a password or access key to work with your Why should I use IAM? 3 AWS Identity and Access Management User Guide account, but also a code from a specially configured device. 
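As a rough sketch of MFA in practice (assuming boto3; the MFA device ARN and token code are placeholders you would supply), a user can exchange a current MFA code for temporary, MFA-authenticated session credentials:

```python
# Hedged sketch (assumes boto3); the serial number and code are placeholders.
import boto3

sts = boto3.client("sts")

resp = sts.get_session_token(
    SerialNumber="arn:aws:iam::111122223333:mfa/example-user",  # placeholder
    TokenCode="123456",          # current code from the MFA device
    DurationSeconds=3600,
)

creds = resp["Credentials"]
# Calls made through this session are MFA-authenticated.
mfa_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```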
If you already use a FIDO security key with other services, and it has an AWS supported configuration, you can use WebAuthn for MFA security. For more information, see Supported configurations for using passkeys and security keys Identity federation You can allow users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to access your AWS account. These users are granted temporary credentials that comply with IAM best practice recommendations. Using identity federation enhances the security of your AWS account. Identity information for assurance If you use AWS CloudTrail, you receive log records that include information about those who made requests for resources in your account. That information is based on IAM identities. PCI DSS Compliance IAM supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? AWS Identity and Access Management is a core"} +{"global_id": 18, "doc_id": "iam", "chunk_id": "4", "question_id": 3, "question": "What does identity federation allow users to do?", "answer_span": "You can allow users who already have passwords elsewhere—to access your AWS account.", "chunk": "nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor authentication (MFA) You can add two-factor authentication to your account and to individual users for extra security. With MFA you or your users must provide not only a password or access key to work with your Why should I use IAM? 3 AWS Identity and Access Management User Guide account, but also a code from a specially configured device. If you already use a FIDO security key with other services, and it has an AWS supported configuration, you can use WebAuthn for MFA security. For more information, see Supported configurations for using passkeys and security keys Identity federation You can allow users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to access your AWS account. These users are granted temporary credentials that comply with IAM best practice recommendations. Using identity federation enhances the security of your AWS account. Identity information for assurance If you use AWS CloudTrail, you receive log records that include information about those who made requests for resources in your account. That information is based on IAM identities. PCI DSS Compliance IAM supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? 
AWS Identity and Access Management is a core"} +{"global_id": 19, "doc_id": "iam", "chunk_id": "4", "question_id": 4, "question": "What does IAM support regarding credit card data?", "answer_span": "IAM supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS).", "chunk": "nothing else. Secure access to AWS resources for applications that run on Amazon EC2 You can use IAM features to securely provide credentials for applications that run on EC2 instances. These credentials provide permissions for your application to access other AWS resources. Examples include S3 buckets and DynamoDB tables. Multi-factor authentication (MFA) You can add two-factor authentication to your account and to individual users for extra security. With MFA you or your users must provide not only a password or access key to work with your Why should I use IAM? 3 AWS Identity and Access Management User Guide account, but also a code from a specially configured device. If you already use a FIDO security key with other services, and it has an AWS supported configuration, you can use WebAuthn for MFA security. For more information, see Supported configurations for using passkeys and security keys Identity federation You can allow users who already have passwords elsewhere—for example, in your corporate network or with an internet identity provider—to access your AWS account. These users are granted temporary credentials that comply with IAM best practice recommendations. Using identity federation enhances the security of your AWS account. Identity information for assurance If you use AWS CloudTrail, you receive log records that include information about those who made requests for resources in your account. That information is based on IAM identities. PCI DSS Compliance IAM supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? AWS Identity and Access Management is a core"} +{"global_id": 20, "doc_id": "iam", "chunk_id": "5", "question_id": 1, "question": "What standard has been validated for compliance?", "answer_span": "been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS).", "chunk": "been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. The way you use IAM will depend on the specific responsibilities and job functions within your organization. Users of AWS services use IAM to access the AWS resources required for their day-to-day work, with administrators granting the appropriate permissions. IAM administrators, on the other hand, are responsible for managing IAM identities and writing policies to control access to resources. Regardless of your role, you interact with IAM whenever you authenticate and authorize access to AWS resources. 
This could involve signing in as an IAM user, assuming an IAM role, or leveraging identity federation for seamless access. Understanding the various IAM capabilities and use cases is crucial for effectively managing secure access to your AWS environment. When it comes to creating policies and permissions, IAM provides a flexible and granular approach. You can define trust policies to control which principals can assume a role, in addition to identity-based policies that specify the actions and resources a user or role can Identity federation 4 AWS Identity and Access Management User Guide access. By configuring these IAM policies, you can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM"} +{"global_id": 21, "doc_id": "iam", "chunk_id": "5", "question_id": 2, "question": "What is AWS Identity and Access Management (IAM)?", "answer_span": "AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS.", "chunk": "been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. The way you use IAM will depend on the specific responsibilities and job functions within your organization. Users of AWS services use IAM to access the AWS resources required for their day-to-day work, with administrators granting the appropriate permissions. IAM administrators, on the other hand, are responsible for managing IAM identities and writing policies to control access to resources. Regardless of your role, you interact with IAM whenever you authenticate and authorize access to AWS resources. This could involve signing in as an IAM user, assuming an IAM role, or leveraging identity federation for seamless access. Understanding the various IAM capabilities and use cases is crucial for effectively managing secure access to your AWS environment. When it comes to creating policies and permissions, IAM provides a flexible and granular approach. You can define trust policies to control which principals can assume a role, in addition to identity-based policies that specify the actions and resources a user or role can Identity federation 4 AWS Identity and Access Management User Guide access. By configuring these IAM policies, you can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM"} +{"global_id": 22, "doc_id": "iam", "chunk_id": "5", "question_id": 3, "question": "When do you use IAM?", "answer_span": "You use IAM every time you access your AWS account.", "chunk": "been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). 
For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. The way you use IAM will depend on the specific responsibilities and job functions within your organization. Users of AWS services use IAM to access the AWS resources required for their day-to-day work, with administrators granting the appropriate permissions. IAM administrators, on the other hand, are responsible for managing IAM identities and writing policies to control access to resources. Regardless of your role, you interact with IAM whenever you authenticate and authorize access to AWS resources. This could involve signing in as an IAM user, assuming an IAM role, or leveraging identity federation for seamless access. Understanding the various IAM capabilities and use cases is crucial for effectively managing secure access to your AWS environment. When it comes to creating policies and permissions, IAM provides a flexible and granular approach. You can define trust policies to control which principals can assume a role, in addition to identity-based policies that specify the actions and resources a user or role can Identity federation 4 AWS Identity and Access Management User Guide access. By configuring these IAM policies, you can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM"} +{"global_id": 23, "doc_id": "iam", "chunk_id": "5", "question_id": 4, "question": "What do IAM administrators manage?", "answer_span": "IAM administrators, on the other hand, are responsible for managing IAM identities and writing policies to control access to resources.", "chunk": "been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. When do I use IAM? AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. The way you use IAM will depend on the specific responsibilities and job functions within your organization. Users of AWS services use IAM to access the AWS resources required for their day-to-day work, with administrators granting the appropriate permissions. IAM administrators, on the other hand, are responsible for managing IAM identities and writing policies to control access to resources. Regardless of your role, you interact with IAM whenever you authenticate and authorize access to AWS resources. This could involve signing in as an IAM user, assuming an IAM role, or leveraging identity federation for seamless access. Understanding the various IAM capabilities and use cases is crucial for effectively managing secure access to your AWS environment. When it comes to creating policies and permissions, IAM provides a flexible and granular approach. 
You can define trust policies to control which principals can assume a role, in addition to identity-based policies that specify the actions and resources a user or role can Identity federation 4 AWS Identity and Access Management User Guide access. By configuring these IAM policies, you can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM"} +{"global_id": 24, "doc_id": "iam", "chunk_id": "6", "question_id": 1, "question": "What is AWS Identity and Access Management?", "answer_span": "AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS.", "chunk": "can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. How you use IAM differs, depending on the work that you do in AWS. • Service user – If you use an AWS service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more advanced features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. • Service administrator – If you're in charge of an AWS resource at your company, you probably have full access to IAM. It's your job to determine which IAM features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. • IAM administrator – If you're an IAM administrator, you manage IAM identities and write policies to manage access to IAM. When you are authorized to access AWS resources Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of"} +{"global_id": 25, "doc_id": "iam", "chunk_id": "6", "question_id": 2, "question": "Who provides credentials and permissions to a service user?", "answer_span": "your administrator provides you with the credentials and permissions that you need.", "chunk": "can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. How you use IAM differs, depending on the work that you do in AWS. 
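To illustrate the distinction above between trust policies and identity-based policies, the following hedged boto3 sketch creates a role whose trust policy controls which principals may assume it, then attaches a separate permissions policy that controls what the role can do. The role name, trusted service, and managed policy are assumptions for the example, not values from the guide.

```python
# Illustrative sketch (assumes boto3); names and policy choices are placeholders.
import json
import boto3

iam = boto3.client("iam")

# Trust policy: controls WHO may assume the role (here, EC2 instances).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="example-app-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Identity-based policy: controls WHAT the role can do once assumed.
iam.attach_role_policy(
    RoleName="example-app-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```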
• Service user – If you use an AWS service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more advanced features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. • Service administrator – If you're in charge of an AWS resource at your company, you probably have full access to IAM. It's your job to determine which IAM features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. • IAM administrator – If you're an IAM administrator, you manage IAM identities and write policies to manage access to IAM. When you are authorized to access AWS resources Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of"} +{"global_id": 26, "doc_id": "iam", "chunk_id": "6", "question_id": 3, "question": "What is the role of a service administrator?", "answer_span": "It's your job to determine which IAM features and resources your service users should access.", "chunk": "can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. How you use IAM differs, depending on the work that you do in AWS. • Service user – If you use an AWS service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more advanced features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. • Service administrator – If you're in charge of an AWS resource at your company, you probably have full access to IAM. It's your job to determine which IAM features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. • IAM administrator – If you're an IAM administrator, you manage IAM identities and write policies to manage access to IAM. When you are authorized to access AWS resources Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. 
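Assuming a role, whether directly or indirectly through federation, ultimately yields temporary credentials from AWS STS. A minimal boto3 sketch follows; the role ARN and session name are placeholders for illustration only.

```python
# Minimal sketch (assumes boto3); the role ARN is a placeholder.
import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/example-app-role",
    RoleSessionName="example-session",
    DurationSeconds=3600,
)

creds = resp["Credentials"]  # temporary credentials that expire
role_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# Calls made through role_session are authorized as the assumed role.
```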
AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of"} +{"global_id": 27, "doc_id": "iam", "chunk_id": "6", "question_id": 4, "question": "What must you be to access AWS resources?", "answer_span": "You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role.", "chunk": "can help ensure that users and applications have the appropriate level of permissions to perform their required tasks. When you are performing different job functions AWS Identity and Access Management is a core infrastructure service that provides the foundation for access control based on identities within AWS. You use IAM every time you access your AWS account. How you use IAM differs, depending on the work that you do in AWS. • Service user – If you use an AWS service to do your job, then your administrator provides you with the credentials and permissions that you need. As you use more advanced features to do your work, you might need additional permissions. Understanding how access is managed can help you request the right permissions from your administrator. • Service administrator – If you're in charge of an AWS resource at your company, you probably have full access to IAM. It's your job to determine which IAM features and resources your service users should access. You must then submit requests to your IAM administrator to change the permissions of your service users. Review the information on this page to understand the basic concepts of IAM. • IAM administrator – If you're an IAM administrator, you manage IAM identities and write policies to manage access to IAM. When you are authorized to access AWS resources Authentication is how you sign in to AWS using your identity credentials. You must be authenticated (signed in to AWS) as the AWS account root user, as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of"} +{"global_id": 28, "doc_id": "iam", "chunk_id": "7", "question_id": 1, "question": "What can you use to sign in to AWS as a federated identity?", "answer_span": "You can sign in to AWS as a federated identity by using credentials provided through an identity source.", "chunk": "as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role. Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide. When you are performing different job functions 5 AWS Identity and Access Management User Guide If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. 
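When you use an AWS SDK, this signing happens automatically. The rough botocore sketch below shows what signing a request yourself with Signature Version 4 involves; the endpoint, region, and action shown are illustrative assumptions rather than a prescribed workflow.

```python
# Rough sketch of SigV4 signing (assumes botocore and configured credentials;
# the endpoint, region, and action are illustrative).
import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

creds = botocore.session.Session().get_credentials().get_frozen_credentials()

request = AWSRequest(
    method="POST",
    url="https://iam.amazonaws.com/",
    data="Action=GetUser&Version=2010-05-08",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# Adds the X-Amz-Date and Authorization headers required by SigV4.
SigV4Auth(creds, "iam", "us-east-1").add_auth(request)
print(request.headers["Authorization"])
```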
If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide. Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials"} +{"global_id": 29, "doc_id": "iam", "chunk_id": "7", "question_id": 2, "question": "What are examples of federated identities?", "answer_span": "AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities.", "chunk": "as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role. Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide. When you are performing different job functions 5 AWS Identity and Access Management User Guide If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide. Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials"} +{"global_id": 30, "doc_id": "iam", "chunk_id": "7", "question_id": 3, "question": "What does AWS provide if you access AWS programmatically?", "answer_span": "AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials.", "chunk": "as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. 
AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role. Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide. When you are performing different job functions 5 AWS Identity and Access Management User Guide If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide. Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials"} +{"global_id": 31, "doc_id": "iam", "chunk_id": "7", "question_id": 4, "question": "What does AWS recommend to increase the security of your account?", "answer_span": "AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account.", "chunk": "as an IAM user, or by assuming an IAM role. You can sign in to AWS as a federated identity by using credentials provided through an identity source. AWS IAM Identity Center (IAM Identity Center) users, your company's single sign-on authentication, and your Google or Facebook credentials are examples of federated identities. When you sign in as a federated identity, your administrator previously set up identity federation using IAM roles. When you access AWS by using federation, you are indirectly assuming a role. Depending on the type of user you are, you can sign in to the AWS Management Console or the AWS access portal. For more information about signing in to AWS, see How to sign in to your AWS account in the AWS Sign-In User Guide. When you are performing different job functions 5 AWS Identity and Access Management User Guide If you access AWS programmatically, AWS provides a software development kit (SDK) and a command line interface (CLI) to cryptographically sign your requests by using your credentials. If you don't use AWS tools, you must sign requests yourself. For more information about using the recommended method to sign requests yourself, see AWS Signature Version 4 for API requests in the IAM User Guide. Regardless of the authentication method that you use, you might be required to provide additional security information. For example, AWS recommends that you use multi-factor authentication (MFA) to increase the security of your account. 
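The guide recommends multi-factor authentication as an additional layer of security. One hedged illustration of how MFA can appear in programmatic access is requiring an MFA code when assuming a role with AWS STS; the role ARN, MFA device ARN, and one-time code below are placeholders.

```python
# Sketch: assuming a role that requires MFA, using STS temporary credentials.
import boto3

sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ExampleAdminRole",   # placeholder role
    RoleSessionName="mfa-protected-session",
    SerialNumber="arn:aws:iam::111122223333:mfa/example-user",   # placeholder MFA device
    TokenCode="123456",                                          # current code from the device
)
creds = assumed["Credentials"]

# Build a session backed by the temporary credentials returned by STS.
role_session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(role_session.client("sts").get_caller_identity()["Arn"])
```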
To learn more, see Multi-factor authentication in the AWS IAM Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials"} +{"global_id": 32, "doc_id": "iam", "chunk_id": "8", "question_id": 1, "question": "What is an IAM user?", "answer_span": "An IAM user is an identity within your AWS account that has specific permissions for a single person or application.", "chunk": "Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require longterm credentials in the IAM User Guide. An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources. Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide. When you assume an IAM role An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information"} +{"global_id": 33, "doc_id": "iam", "chunk_id": "8", "question_id": 2, "question": "What do we recommend instead of creating IAM users with long-term credentials?", "answer_span": "Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys.", "chunk": "Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require longterm credentials in the IAM User Guide. An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. 
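The text uses a group named IAMAdmins, granted permission to administer IAM, as its example of managing permissions for many users at once. A minimal boto3 sketch of that pattern follows; the user name is a placeholder, and IAMFullAccess is just one managed policy such a group might use.

```python
# Sketch: grant IAM administration through group membership rather than per user.
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="IAMAdmins")

# Attach an AWS managed policy to the group (one possible choice for IAM admins).
iam.attach_group_policy(
    GroupName="IAMAdmins",
    PolicyArn="arn:aws:iam::aws:policy/IAMFullAccess",
)

# Adding or removing members is now the only per-user step.
iam.add_user_to_group(GroupName="IAMAdmins", UserName="example-admin")  # placeholder user
```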
Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources. Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide. When you assume an IAM role An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information"} +{"global_id": 34, "doc_id": "iam", "chunk_id": "8", "question_id": 3, "question": "What is an IAM group?", "answer_span": "An IAM group is an identity that specifies a collection of IAM users.", "chunk": "Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require longterm credentials in the IAM User Guide. An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources. Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide. When you assume an IAM role An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information"} +{"global_id": 35, "doc_id": "iam", "chunk_id": "8", "question_id": 4, "question": "How is a user different from a role?", "answer_span": "A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it.", "chunk": "Identity Center User Guide and AWS Multi-factor authentication in IAM in the IAM User Guide. When you sign-in as an IAM user An IAM user is an identity within your AWS account that has specific permissions for a single person or application. Where possible, we recommend relying on temporary credentials instead of creating IAM users who have long-term credentials such as passwords and access keys. 
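For the use cases above that genuinely need long-term credentials, the guide's recommendation is to rotate access keys regularly. A sketch of one rotation pass with boto3 follows; the user name and the old key ID are placeholders, and in practice the old key ID would come from list_access_keys.

```python
# Sketch: rotate an IAM user's access key (create new, deactivate old, delete old).
import boto3

iam = boto3.client("iam")
user_name = "example-service-user"            # placeholder

# 1. Create the replacement key and roll it out to the application.
new_key = iam.create_access_key(UserName=user_name)["AccessKey"]
print("new key id:", new_key["AccessKeyId"])

# 2. Once the application is using the new key, deactivate the old one.
old_key_id = "AKIAIOSFODNN7EXAMPLE"           # placeholder old key ID
iam.update_access_key(UserName=user_name, AccessKeyId=old_key_id, Status="Inactive")

# 3. After confirming nothing still calls AWS with it, delete the old key.
iam.delete_access_key(UserName=user_name, AccessKeyId=old_key_id)
```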
However, if you have specific use cases that require long-term credentials with IAM users, we recommend that you rotate access keys. For more information, see Rotate access keys regularly for use cases that require longterm credentials in the IAM User Guide. An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You can use groups to specify permissions for multiple users at a time. Groups make permissions easier to manage for large sets of users. For example, you could have a group named IAMAdmins and give that group permissions to administer IAM resources. Users are different from roles. A user is uniquely associated with one person or application, but a role is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but roles provide temporary credentials. To learn more, see Use cases for IAM users in the IAM User Guide. When you assume an IAM role An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM user, but is not associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information"} +{"global_id": 36, "doc_id": "iam", "chunk_id": "9", "question_id": 1, "question": "How can you temporarily assume an IAM role in the AWS Management Console?", "answer_span": "To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console).", "chunk": "associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide. IAM roles with temporary credentials are useful in the following situations: When you sign-in as an IAM user 6 AWS Identity and Access Management User Guide • Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide. • Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task. • Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant crossaccount access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). 
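The closing sentence above points out that some services let you attach a policy directly to a resource instead of using a role for cross-account access. Amazon S3 bucket policies are a common example of that pattern; the sketch below is an illustration with placeholder account and bucket names, not text from the guide.

```python
# Sketch: a resource-based policy that grants another account read access to a bucket.
import json

import boto3

s3 = boto3.client("s3")

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCrossAccountRead",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},   # placeholder trusted account
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",          # placeholder bucket
    }],
}

s3.put_bucket_policy(Bucket="amzn-s3-demo-bucket", Policy=json.dumps(bucket_policy))
```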
To learn the difference between roles and resource-based policies for"} +{"global_id": 37, "doc_id": "iam", "chunk_id": "9", "question_id": 2, "question": "What are IAM roles with temporary credentials useful for?", "answer_span": "IAM roles with temporary credentials are useful in the following situations: When you sign-in as an IAM user.", "chunk": "associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide. IAM roles with temporary credentials are useful in the following situations: When you sign-in as an IAM user 6 AWS Identity and Access Management User Guide • Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide. • Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task. • Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant crossaccount access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for"} +{"global_id": 38, "doc_id": "iam", "chunk_id": "9", "question_id": 3, "question": "What do you create to assign permissions to a federated identity?", "answer_span": "To assign permissions to a federated identity, you create a role and define permissions for the role.", "chunk": "associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide. IAM roles with temporary credentials are useful in the following situations: When you sign-in as an IAM user 6 AWS Identity and Access Management User Guide • Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. 
For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide. • Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task. • Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant crossaccount access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for"} +{"global_id": 39, "doc_id": "iam", "chunk_id": "9", "question_id": 4, "question": "What can an IAM user or role do to temporarily take on different permissions?", "answer_span": "An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task.", "chunk": "associated with a specific person. To temporarily assume an IAM role in the AWS Management Console, you can switch from a user to an IAM role (console). You can assume a role by calling an AWS CLI or AWS API operation or by using a custom URL. For more information about methods for using roles, see Methods to assume a role in the IAM User Guide. IAM roles with temporary credentials are useful in the following situations: When you sign-in as an IAM user 6 AWS Identity and Access Management User Guide • Federated user access – To assign permissions to a federated identity, you create a role and define permissions for the role. When a federated identity authenticates, the identity is associated with the role and is granted the permissions that are defined by the role. For information about roles for federation, see Create a role for a third-party identity provider (federation) in the IAM User Guide. If you use IAM Identity Center, you configure a permission set. To control what your identities can access after they authenticate, IAM Identity Center correlates the permission set to a role in IAM. For information about permissions sets, see Permission sets in the AWS IAM Identity Center User Guide. • Temporary IAM user permissions – An IAM user or role can assume an IAM role to temporarily take on different permissions for a specific task. • Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant crossaccount access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for"} +{"global_id": 40, "doc_id": "iam", "chunk_id": "10", "question_id": 1, "question": "What are the primary way to grant cross-account access?", "answer_span": "Roles are the primary way to grant crossaccount access.", "chunk": "a different account to access resources in your account. Roles are the primary way to grant crossaccount access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. • Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. 
A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. • Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. • Service role – A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role"} +{"global_id": 41, "doc_id": "iam", "chunk_id": "10", "question_id": 2, "question": "What is a service role?", "answer_span": "A service role is an IAM role that a service assumes to perform actions on your behalf.", "chunk": "a different account to access resources in your account. Roles are the primary way to grant crossaccount access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. • Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. • Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. • Service role – A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role"} +{"global_id": 42, "doc_id": "iam", "chunk_id": "10", "question_id": 3, "question": "What do FAS requests require?", "answer_span": "FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete.", "chunk": "a different account to access resources in your account. Roles are the primary way to grant crossaccount access. 
However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. • Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. • Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. • Service role – A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role"} +{"global_id": 43, "doc_id": "iam", "chunk_id": "10", "question_id": 4, "question": "What can an IAM administrator do with a service role?", "answer_span": "An IAM administrator can create, modify, and delete a service role from within IAM.", "chunk": "a different account to access resources in your account. Roles are the primary way to grant crossaccount access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see Cross account resource access in IAM in the IAM User Guide. • Cross-service access – Some AWS services use features in other AWS services. For example, when you make a call in a service, it's common for that service to run applications in Amazon EC2 or store objects in Amazon S3. A service might do this using the calling principal's permissions, using a service role, or using a service-linked role. • Forward access sessions (FAS) – When you use an IAM user or role to perform actions in AWS, you are considered a principal. When you use some services, you might perform an action that then initiates another action in a different service. FAS uses the permissions of the principal calling an AWS service, combined with the requesting AWS service to make requests to downstream services. FAS requests are only made when a service receives a request that requires interactions with other AWS services or resources to complete. In this case, you must have permissions to perform both actions. For policy details when making FAS requests, see Forward access sessions. • Service role – A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. 
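A service role is an ordinary IAM role whose trust policy names a service principal that is allowed to assume it. The sketch below creates such a role for Amazon EC2 with boto3; the role name and the read-only S3 policy are illustrative choices, not prescribed by the guide.

```python
# Sketch: create a service role that the EC2 service can assume on your behalf.
import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},   # the service allowed to assume the role
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="example-ec2-app-role",                     # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Example role assumed by EC2 for an application",
)

# Grant the permissions the service will use; read-only S3 is just an example.
iam.attach_role_policy(
    RoleName="example-ec2-app-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```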
For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role"} +{"global_id": 44, "doc_id": "iam", "chunk_id": "11", "question_id": 1, "question": "What can an IAM administrator do with a service role?", "answer_span": "An IAM administrator can create, modify, and delete a service role from within IAM.", "chunk": "actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. When you assume an IAM role 7 AWS Identity and Access Management User Guide • Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide. When you create policies and permissions You grant permissions to a user by creating a policy, which is a document that lists the actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with"} +{"global_id": 45, "doc_id": "iam", "chunk_id": "11", "question_id": 2, "question": "What is a service-linked role?", "answer_span": "A service-linked role is a type of service role that is linked to an AWS service.", "chunk": "actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. When you assume an IAM role 7 AWS Identity and Access Management User Guide • Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. 
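The instance-profile step described above can be sketched in a few calls: wrap the role in an instance profile, then associate the profile with the instance. The profile, role, and instance ID below are placeholders that assume the role from the previous sketch already exists.

```python
# Sketch: expose a role to applications on an EC2 instance via an instance profile.
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

iam.create_instance_profile(InstanceProfileName="example-ec2-app-profile")
iam.add_role_to_instance_profile(
    InstanceProfileName="example-ec2-app-profile",
    RoleName="example-ec2-app-role",                     # placeholder role
)

# Applications on the instance now obtain temporary credentials from the
# instance metadata service instead of using stored access keys.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "example-ec2-app-profile"},
    InstanceId="i-0123456789abcdef0",                    # placeholder instance ID
)
```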
An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide. When you create policies and permissions You grant permissions to a user by creating a policy, which is a document that lists the actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with"} +{"global_id": 46, "doc_id": "iam", "chunk_id": "11", "question_id": 3, "question": "What is preferable to storing access keys within the EC2 instance?", "answer_span": "This is preferable to storing access keys within the EC2 instance.", "chunk": "actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. When you assume an IAM role 7 AWS Identity and Access Management User Guide • Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide. When you create policies and permissions You grant permissions to a user by creating a policy, which is a document that lists the actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with"} +{"global_id": 47, "doc_id": "iam", "chunk_id": "11", "question_id": 4, "question": "How do you grant permissions to a user?", "answer_span": "You grant permissions to a user by creating a policy, which is a document that lists the actions that a user can perform and the resources those actions can affect.", "chunk": "actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide. • Service-linked role – A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. 
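A service-linked role can also be created explicitly through IAM, even though it is owned and managed by the linked service. A one-call sketch with boto3 follows; the service name is only an example of a service that supports service-linked roles.

```python
# Sketch: explicitly create a service-linked role for a service that supports them.
import boto3

iam = boto3.client("iam")

response = iam.create_service_linked_role(
    AWSServiceName="autoscaling.amazonaws.com",   # example service; substitute the one you use
)
print(response["Role"]["Arn"])   # the role appears in your account but is owned by the service
```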
When you assume an IAM role 7 AWS Identity and Access Management User Guide • Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials for applications that are running on an EC2 instance and making AWS CLI or AWS API requests. This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2 instance and make it available to all of its applications, you create an instance profile that is attached to the instance. An instance profile contains the role and enables programs that are running on the EC2 instance to get temporary credentials. For more information, see Use an IAM role to grant permissions to applications running on Amazon EC2 instances in the IAM User Guide. When you create policies and permissions You grant permissions to a user by creating a policy, which is a document that lists the actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with"} +{"global_id": 48, "doc_id": "iam", "chunk_id": "12", "question_id": 1, "question": "What are the actions that a user can perform and the resources those actions can affect?", "answer_span": "actions that a user can perform and the resources those actions can affect.", "chunk": "actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with an IAM role: • Trust policy – Defines which principal can assume the role, and under which conditions. A trust policy is a specific type of resource-based policy for IAM roles. A role can have only one trust policy. • Identity-based policies (inline and managed) – These policies define the permissions that the user of the role is able to perform (or is denied from performing), and on which resources. Use the Example IAM identity-based policies to help you define permissions for your IAM identities. After you find the policy that you need, choose view the policy to view the JSON for the policy. You can use the JSON policy document as a template for your own policies. Note If you are using IAM Identity Center to manage your users, you assign permission sets in IAM Identity Center instead of attaching a permissions policy to a principal. When you assign a permission set to a group or user in AWS IAM Identity Center, IAM Identity Center creates corresponding IAM roles in each account, and attaches the policies specified in the permission set to those roles. IAM Identity Center manages the role, and allows the authorized users you’ve defined to assume the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? 
in"} +{"global_id": 49, "doc_id": "iam", "chunk_id": "12", "question_id": 2, "question": "What happens to actions or resources that are not explicitly allowed?", "answer_span": "Any actions or resources that are not explicitly allowed are denied by default.", "chunk": "actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with an IAM role: • Trust policy – Defines which principal can assume the role, and under which conditions. A trust policy is a specific type of resource-based policy for IAM roles. A role can have only one trust policy. • Identity-based policies (inline and managed) – These policies define the permissions that the user of the role is able to perform (or is denied from performing), and on which resources. Use the Example IAM identity-based policies to help you define permissions for your IAM identities. After you find the policy that you need, choose view the policy to view the JSON for the policy. You can use the JSON policy document as a template for your own policies. Note If you are using IAM Identity Center to manage your users, you assign permission sets in IAM Identity Center instead of attaching a permissions policy to a principal. When you assign a permission set to a group or user in AWS IAM Identity Center, IAM Identity Center creates corresponding IAM roles in each account, and attaches the policies specified in the permission set to those roles. IAM Identity Center manages the role, and allows the authorized users you’ve defined to assume the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? in"} +{"global_id": 50, "doc_id": "iam", "chunk_id": "12", "question_id": 3, "question": "What defines which principal can assume the role in a trust policy?", "answer_span": "Trust policy – Defines which principal can assume the role, and under which conditions.", "chunk": "actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with an IAM role: • Trust policy – Defines which principal can assume the role, and under which conditions. A trust policy is a specific type of resource-based policy for IAM roles. A role can have only one trust policy. • Identity-based policies (inline and managed) – These policies define the permissions that the user of the role is able to perform (or is denied from performing), and on which resources. Use the Example IAM identity-based policies to help you define permissions for your IAM identities. After you find the policy that you need, choose view the policy to view the JSON for the policy. You can use the JSON policy document as a template for your own policies. Note If you are using IAM Identity Center to manage your users, you assign permission sets in IAM Identity Center instead of attaching a permissions policy to a principal. 
When you assign a permission set to a group or user in AWS IAM Identity Center, IAM Identity Center creates corresponding IAM roles in each account, and attaches the policies specified in the permission set to those roles. IAM Identity Center manages the role, and allows the authorized users you’ve defined to assume the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? in"} +{"global_id": 51, "doc_id": "iam", "chunk_id": "12", "question_id": 4, "question": "What does IAM Identity Center do when you assign a permission set to a group or user?", "answer_span": "IAM Identity Center creates corresponding IAM roles in each account, and attaches the policies specified in the permission set to those roles.", "chunk": "actions that a user can perform and the resources those actions can affect. Any actions or resources that are not explicitly allowed are denied by default. Policies can be created and attached to principals (users, groups of users, roles assumed by users, and resources). You can use these policies with an IAM role: • Trust policy – Defines which principal can assume the role, and under which conditions. A trust policy is a specific type of resource-based policy for IAM roles. A role can have only one trust policy. • Identity-based policies (inline and managed) – These policies define the permissions that the user of the role is able to perform (or is denied from performing), and on which resources. Use the Example IAM identity-based policies to help you define permissions for your IAM identities. After you find the policy that you need, choose view the policy to view the JSON for the policy. You can use the JSON policy document as a template for your own policies. Note If you are using IAM Identity Center to manage your users, you assign permission sets in IAM Identity Center instead of attaching a permissions policy to a principal. When you assign a permission set to a group or user in AWS IAM Identity Center, IAM Identity Center creates corresponding IAM roles in each account, and attaches the policies specified in the permission set to those roles. IAM Identity Center manages the role, and allows the authorized users you’ve defined to assume the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? in"} +{"global_id": 52, "doc_id": "iam", "chunk_id": "13", "question_id": 1, "question": "What does IAM Identity Center ensure when you modify the permission set?", "answer_span": "IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly.", "chunk": "the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? in the AWS IAM Identity Center User Guide. How do I manage IAM? 
Managing AWS Identity and Access Management within an AWS environment involves leveraging a variety of tools and interfaces. The most common method is through the AWS Management Console, a web-based interface that allows you to perform a wide range of IAM administrative tasks, from creating users and roles to configuring permissions. For users more comfortable with command line interfaces, AWS provides two sets of command line tools - the AWS Command Line Interface and the AWS Tools for Windows PowerShell. These allow you to issue IAM-related commands directly from the terminal, often more efficiently than navigating the console. Additionally, AWS CloudShell enables you to run CLI or SDK commands directly from your web browser, using the permissions associated with your console sign-in. Beyond the console and command line, AWS offers Software Development Kits (SDKs) for various programming languages, enabling you to integrate IAM management functionality directly into your applications. Alternatively, you can access IAM programmatically using the IAM Query API, which allows you to issue HTTPS requests directly to the service. Leveraging these different management approaches provides you with the flexibility to incorporate IAM into your existing workflows and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console"} +{"global_id": 53, "doc_id": "iam", "chunk_id": "13", "question_id": 2, "question": "What is the most common method for managing AWS Identity and Access Management?", "answer_span": "The most common method is through the AWS Management Console, a web-based interface that allows you to perform a wide range of IAM administrative tasks.", "chunk": "the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? in the AWS IAM Identity Center User Guide. How do I manage IAM? Managing AWS Identity and Access Management within an AWS environment involves leveraging a variety of tools and interfaces. The most common method is through the AWS Management Console, a web-based interface that allows you to perform a wide range of IAM administrative tasks, from creating users and roles to configuring permissions. For users more comfortable with command line interfaces, AWS provides two sets of command line tools - the AWS Command Line Interface and the AWS Tools for Windows PowerShell. These allow you to issue IAM-related commands directly from the terminal, often more efficiently than navigating the console. Additionally, AWS CloudShell enables you to run CLI or SDK commands directly from your web browser, using the permissions associated with your console sign-in. Beyond the console and command line, AWS offers Software Development Kits (SDKs) for various programming languages, enabling you to integrate IAM management functionality directly into your applications. Alternatively, you can access IAM programmatically using the IAM Query API, which allows you to issue HTTPS requests directly to the service. 
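Of the management paths listed above, the SDK route is the easiest to show in a few lines. A minimal sketch that enumerates the roles in an account with boto3 follows; the AWS CLI equivalent would be aws iam list-roles, and the Query API form of a similar call appears in the earlier signing sketch.

```python
# Sketch: managing IAM through an SDK; list every role in the account.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_roles").paginate():
    for role in page["Roles"]:
        print(role["RoleName"], role["Arn"])
```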
Leveraging these different management approaches provides you with the flexibility to incorporate IAM into your existing workflows and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console"} +{"global_id": 54, "doc_id": "iam", "chunk_id": "13", "question_id": 3, "question": "What tools does AWS provide for users more comfortable with command line interfaces?", "answer_span": "AWS provides two sets of command line tools - the AWS Command Line Interface and the AWS Tools for Windows PowerShell.", "chunk": "the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? in the AWS IAM Identity Center User Guide. How do I manage IAM? Managing AWS Identity and Access Management within an AWS environment involves leveraging a variety of tools and interfaces. The most common method is through the AWS Management Console, a web-based interface that allows you to perform a wide range of IAM administrative tasks, from creating users and roles to configuring permissions. For users more comfortable with command line interfaces, AWS provides two sets of command line tools - the AWS Command Line Interface and the AWS Tools for Windows PowerShell. These allow you to issue IAM-related commands directly from the terminal, often more efficiently than navigating the console. Additionally, AWS CloudShell enables you to run CLI or SDK commands directly from your web browser, using the permissions associated with your console sign-in. Beyond the console and command line, AWS offers Software Development Kits (SDKs) for various programming languages, enabling you to integrate IAM management functionality directly into your applications. Alternatively, you can access IAM programmatically using the IAM Query API, which allows you to issue HTTPS requests directly to the service. Leveraging these different management approaches provides you with the flexibility to incorporate IAM into your existing workflows and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console"} +{"global_id": 55, "doc_id": "iam", "chunk_id": "13", "question_id": 4, "question": "What does the AWS Management Console comprise?", "answer_span": "The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources.", "chunk": "the role. If you modify the permission set, When you create policies and permissions 8 AWS Identity and Access Management User Guide IAM Identity Center ensures that the corresponding IAM policies and roles are updated accordingly. For more information about IAM Identity Center, see What is IAM Identity Center? in the AWS IAM Identity Center User Guide. How do I manage IAM? Managing AWS Identity and Access Management within an AWS environment involves leveraging a variety of tools and interfaces. 
The most common method is through the AWS Management Console, a web-based interface that allows you to perform a wide range of IAM administrative tasks, from creating users and roles to configuring permissions. For users more comfortable with command line interfaces, AWS provides two sets of command line tools - the AWS Command Line Interface and the AWS Tools for Windows PowerShell. These allow you to issue IAM-related commands directly from the terminal, often more efficiently than navigating the console. Additionally, AWS CloudShell enables you to run CLI or SDK commands directly from your web browser, using the permissions associated with your console sign-in. Beyond the console and command line, AWS offers Software Development Kits (SDKs) for various programming languages, enabling you to integrate IAM management functionality directly into your applications. Alternatively, you can access IAM programmatically using the IAM Query API, which allows you to issue HTTPS requests directly to the service. Leveraging these different management approaches provides you with the flexibility to incorporate IAM into your existing workflows and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console"} +{"global_id": 56, "doc_id": "iam", "chunk_id": "14", "question_id": 1, "question": "What is the AWS Management Console?", "answer_span": "The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources.", "chunk": "and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console and offers a single place to access the information for performing your AWS related tasks. Which services and applications are available to you after signing in to the console depend on which AWS resources you have permission to access. You can be granted permissions to resources either through assuming a role, being a member of a group that has been granted permissions, or being explicitly granted permission. For a stand-alone AWS account, the root user or IAM administrator configures access to resources. For AWS Organizations, the management account or delegated administrator configures access to resources. How do I manage IAM? 9 AWS Identity and Access Management User Guide If you plan to have people using the AWS Management Console to manage AWS resources, we recommend configuring users with temporary credentials as a security best practice. IAM users that have assumed a role, federated principals, and users in IAM Identity Center have temporary credentials, while the IAM user and root user have long-term credentials. Root user credentials provide full access to the AWS account, while other users have credentials that provide access to the resources granted them by IAM policies. The sign-in experience is different for the different types of AWS Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. 
To sign in as the root user you must have the"} +{"global_id": 57, "doc_id": "iam", "chunk_id": "14", "question_id": 2, "question": "What determines which services and applications are available after signing in to the console?", "answer_span": "Which services and applications are available to you after signing in to the console depend on which AWS resources you have permission to access.", "chunk": "and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console and offers a single place to access the information for performing your AWS related tasks. Which services and applications are available to you after signing in to the console depend on which AWS resources you have permission to access. You can be granted permissions to resources either through assuming a role, being a member of a group that has been granted permissions, or being explicitly granted permission. For a stand-alone AWS account, the root user or IAM administrator configures access to resources. For AWS Organizations, the management account or delegated administrator configures access to resources. How do I manage IAM? 9 AWS Identity and Access Management User Guide If you plan to have people using the AWS Management Console to manage AWS resources, we recommend configuring users with temporary credentials as a security best practice. IAM users that have assumed a role, federated principals, and users in IAM Identity Center have temporary credentials, while the IAM user and root user have long-term credentials. Root user credentials provide full access to the AWS account, while other users have credentials that provide access to the resources granted them by IAM policies. The sign-in experience is different for the different types of AWS Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. To sign in as the root user you must have the"} +{"global_id": 58, "doc_id": "iam", "chunk_id": "14", "question_id": 3, "question": "What is recommended as a security best practice for managing AWS resources?", "answer_span": "we recommend configuring users with temporary credentials as a security best practice.", "chunk": "and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console and offers a single place to access the information for performing your AWS related tasks. Which services and applications are available to you after signing in to the console depend on which AWS resources you have permission to access. You can be granted permissions to resources either through assuming a role, being a member of a group that has been granted permissions, or being explicitly granted permission. For a stand-alone AWS account, the root user or IAM administrator configures access to resources. For AWS Organizations, the management account or delegated administrator configures access to resources. How do I manage IAM? 
9 AWS Identity and Access Management User Guide If you plan to have people using the AWS Management Console to manage AWS resources, we recommend configuring users with temporary credentials as a security best practice. IAM users that have assumed a role, federated principals, and users in IAM Identity Center have temporary credentials, while the IAM user and root user have long-term credentials. Root user credentials provide full access to the AWS account, while other users have credentials that provide access to the resources granted them by IAM policies. The sign-in experience is different for the different types of AWS Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. To sign in as the root user you must have the"} +{"global_id": 59, "doc_id": "iam", "chunk_id": "14", "question_id": 4, "question": "Where do IAM users and the root user sign in?", "answer_span": "IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com).", "chunk": "and processes. Use the AWS Management Console The AWS Management Console is a web application that comprises and refers to a broad collection of service consoles for managing AWS resources. When you first sign in, you see the console home page. The home page provides access to each service console and offers a single place to access the information for performing your AWS related tasks. Which services and applications are available to you after signing in to the console depend on which AWS resources you have permission to access. You can be granted permissions to resources either through assuming a role, being a member of a group that has been granted permissions, or being explicitly granted permission. For a stand-alone AWS account, the root user or IAM administrator configures access to resources. For AWS Organizations, the management account or delegated administrator configures access to resources. How do I manage IAM? 9 AWS Identity and Access Management User Guide If you plan to have people using the AWS Management Console to manage AWS resources, we recommend configuring users with temporary credentials as a security best practice. IAM users that have assumed a role, federated principals, and users in IAM Identity Center have temporary credentials, while the IAM user and root user have long-term credentials. Root user credentials provide full access to the AWS account, while other users have credentials that provide access to the resources granted them by IAM policies. The sign-in experience is different for the different types of AWS Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. To sign in as the root user you must have the"} +{"global_id": 60, "doc_id": "iam", "chunk_id": "15", "question_id": 1, "question": "How do IAM users and the root user sign in?", "answer_span": "IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com).", "chunk": "Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. 
To sign in as the root user you must have the root user email address and password. To sign in as an IAM user you must have the AWS account number or alias, the IAM user name, and the IAM user password. We recommend that you restrict IAM users in your account to specific situations that require long-term credentials, such as for emergency access, and that you use the root user only for tasks that require root user credentials. For convenience, the AWS sign-in page uses a browser cookie to remember the IAM user name and account information. The next time the user goes to any page in the AWS Management Console, the console uses the cookie to redirect the user to the account sign-in page. Sign out of the console when you finish your session to prevent reuse of your previous sign in. • IAM Identity Center users sign in using a specific AWS access portal that's unique to their organization. Once they sign in they can choose which account or application to access. If they choose to access an account, they choose which permission set they want to use for the management session. • OIDC and SAML federated principals managed in an external identity provider linked to an AWS account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users"} +{"global_id": 61, "doc_id": "iam", "chunk_id": "15", "question_id": 2, "question": "What do you need to sign in as the root user?", "answer_span": "To sign in as the root user you must have the root user email address and password.", "chunk": "Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. To sign in as the root user you must have the root user email address and password. To sign in as an IAM user you must have the AWS account number or alias, the IAM user name, and the IAM user password. We recommend that you restrict IAM users in your account to specific situations that require long-term credentials, such as for emergency access, and that you use the root user only for tasks that require root user credentials. For convenience, the AWS sign-in page uses a browser cookie to remember the IAM user name and account information. The next time the user goes to any page in the AWS Management Console, the console uses the cookie to redirect the user to the account sign-in page. Sign out of the console when you finish your session to prevent reuse of your previous sign in. • IAM Identity Center users sign in using a specific AWS access portal that's unique to their organization. Once they sign in they can choose which account or application to access. If they choose to access an account, they choose which permission set they want to use for the management session. • OIDC and SAML federated principals managed in an external identity provider linked to an AWS account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. 
Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users"} +{"global_id": 62, "doc_id": "iam", "chunk_id": "15", "question_id": 3, "question": "What is recommended for IAM users in your account?", "answer_span": "We recommend that you restrict IAM users in your account to specific situations that require long-term credentials, such as for emergency access.", "chunk": "Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. To sign in as the root user you must have the root user email address and password. To sign in as an IAM user you must have the AWS account number or alias, the IAM user name, and the IAM user password. We recommend that you restrict IAM users in your account to specific situations that require long-term credentials, such as for emergency access, and that you use the root user only for tasks that require root user credentials. For convenience, the AWS sign-in page uses a browser cookie to remember the IAM user name and account information. The next time the user goes to any page in the AWS Management Console, the console uses the cookie to redirect the user to the account sign-in page. Sign out of the console when you finish your session to prevent reuse of your previous sign in. • IAM Identity Center users sign in using a specific AWS access portal that's unique to their organization. Once they sign in they can choose which account or application to access. If they choose to access an account, they choose which permission set they want to use for the management session. • OIDC and SAML federated principals managed in an external identity provider linked to an AWS account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users"} +{"global_id": 63, "doc_id": "iam", "chunk_id": "15", "question_id": 4, "question": "How do IAM Identity Center users sign in?", "answer_span": "IAM Identity Center users sign in using a specific AWS access portal that's unique to their organization.", "chunk": "Management Console users. • IAM users and the root user sign-in from the main AWS sign-in URL (https:// signin.aws.amazon.com). Once they sign in they have access to the resources in the account to which they have been granted permission. To sign in as the root user you must have the root user email address and password. To sign in as an IAM user you must have the AWS account number or alias, the IAM user name, and the IAM user password. We recommend that you restrict IAM users in your account to specific situations that require long-term credentials, such as for emergency access, and that you use the root user only for tasks that require root user credentials. For convenience, the AWS sign-in page uses a browser cookie to remember the IAM user name and account information. The next time the user goes to any page in the AWS Management Console, the console uses the cookie to redirect the user to the account sign-in page. Sign out of the console when you finish your session to prevent reuse of your previous sign in. 
• IAM Identity Center users sign in using a specific AWS access portal that's unique to their organization. Once they sign in they can choose which account or application to access. If they choose to access an account, they choose which permission set they want to use for the management session. • OIDC and SAML federated principals managed in an external identity provider linked to an AWS account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users"} +{"global_id": 64, "doc_id": "iam", "chunk_id": "16", "question_id": 1, "question": "What is required for root user, IAM users, and users in IAM Identity Center to access AWS resources?", "answer_span": "can have multi-factor authentication (MFA) verified by AWS before granting access to AWS resources.", "chunk": "account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users in IAM Identity Center can have multi-factor authentication (MFA) verified by AWS before granting access to AWS resources. When MFA is enabled, you must also have access to the MFA device to sign in. To learn more about how different users sign-in to the management console, see Sign in to the AWS Management Console in the AWS Sign-In User Guide. AWS Command Line Tools You can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The command line tools are also useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell. For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. For information about installing and using the Tools for Windows PowerShell, see the AWS Tools for PowerShell User Guide. After signing in to the console, you can use AWS CloudShell from your browser to run CLI or SDK commands. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line"} +{"global_id": 65, "doc_id": "iam", "chunk_id": "16", "question_id": 2, "question": "What can you use to issue commands to perform IAM and AWS tasks?", "answer_span": "You can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks.", "chunk": "account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users in IAM Identity Center can have multi-factor authentication (MFA) verified by AWS before granting access to AWS resources. 
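As one hedged illustration of how MFA enters programmatic access, the sketch below uses the AWS SDK for Python (Boto3) to exchange an IAM user's long-term credentials plus a code from an MFA device for temporary credentials. The device ARN and token code are placeholders, not real values.

```python
import boto3

# Minimal sketch: request temporary credentials that include MFA verification.
# The MFA device ARN and the token code are placeholders.
sts = boto3.client("sts")

response = sts.get_session_token(
    DurationSeconds=3600,
    SerialNumber="arn:aws:iam::123456789012:mfa/example-user",  # hypothetical device ARN
    TokenCode="123456",  # the code currently shown on the MFA device
)

creds = response["Credentials"]
print(creds["AccessKeyId"], "expires", creds["Expiration"])
```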
When MFA is enabled, you must also have access to the MFA device to sign in. To learn more about how different users sign-in to the management console, see Sign in to the AWS Management Console in the AWS Sign-In User Guide. AWS Command Line Tools You can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The command line tools are also useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell. For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. For information about installing and using the Tools for Windows PowerShell, see the AWS Tools for PowerShell User Guide. After signing in to the console, you can use AWS CloudShell from your browser to run CLI or SDK commands. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line"} +{"global_id": 66, "doc_id": "iam", "chunk_id": "16", "question_id": 3, "question": "What are the two sets of command line tools provided by AWS?", "answer_span": "the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell.", "chunk": "account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users in IAM Identity Center can have multi-factor authentication (MFA) verified by AWS before granting access to AWS resources. When MFA is enabled, you must also have access to the MFA device to sign in. To learn more about how different users sign-in to the management console, see Sign in to the AWS Management Console in the AWS Sign-In User Guide. AWS Command Line Tools You can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The command line tools are also useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell. For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. For information about installing and using the Tools for Windows PowerShell, see the AWS Tools for PowerShell User Guide. After signing in to the console, you can use AWS CloudShell from your browser to run CLI or SDK commands. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. 
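The command line tools described above issue the same IAM operations that the console and SDKs use. As a rough illustration, the sketch below performs one such task, the programmatic analogue of running aws iam list-users from the AWS CLI, using the AWS SDK for Python (Boto3); it assumes credentials are already configured for the environment.

```python
import boto3

# Minimal sketch: the programmatic analogue of `aws iam list-users`.
# Assumes credentials are already available to the default credential chain.
iam = boto3.client("iam")

paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        print(user["UserName"], user["Arn"])
```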
For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line"} +{"global_id": 67, "doc_id": "iam", "chunk_id": "16", "question_id": 4, "question": "What can you use after signing in to the console to run CLI or SDK commands?", "answer_span": "you can use AWS CloudShell from your browser to run CLI or SDK commands.", "chunk": "account sign-in using a custom enterprise access portal. The AWS resources available to users are dependent upon the policies selected by their organization. Use the AWS Management Console 10 AWS Identity and Access Management User Guide Note To provide an additional level of security, root user, IAM users, and users in IAM Identity Center can have multi-factor authentication (MFA) verified by AWS before granting access to AWS resources. When MFA is enabled, you must also have access to the MFA device to sign in. To learn more about how different users sign-in to the management console, see Sign in to the AWS Management Console in the AWS Sign-In User Guide. AWS Command Line Tools You can use the AWS command line tools to issue commands at your system's command line to perform IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The command line tools are also useful if you want to build scripts that perform AWS tasks. AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the AWS Tools for Windows PowerShell. For information about installing and using the AWS CLI, see the AWS Command Line Interface User Guide. For information about installing and using the Tools for Windows PowerShell, see the AWS Tools for PowerShell User Guide. After signing in to the console, you can use AWS CloudShell from your browser to run CLI or SDK commands. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line"} +{"global_id": 68, "doc_id": "iam", "chunk_id": "17", "question_id": 1, "question": "What are the credentials based on when signing in to the console?", "answer_span": "are based on the credentials you used to sign-in to the console.", "chunk": "are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line Interface (CLI) and Software Development Kits (SDKs) IAM Identity Center and IAM users use different methods to authenticate their credentials when they authenticate through the CLI or the application interfaces (APIs) in the associated SDKs. Credentials and configuration settings are located in multiple places, such as the system or user environment variables, local AWS configuration files, or explicitly declared on the command line as a parameter. Certain locations take precedence over others. AWS Command Line Tools 11 AWS Identity and Access Management User Guide Both IAM Identity Center and IAM provide access keys that can be used with the CLI or SDK. IAM Identity Center access keys are temporary credentials that can be automatically refreshed and are recommended over the long-term access keys associated with IAM users. 
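The passage above notes that credentials and settings can come from environment variables, the shared configuration files, or explicit command line parameters, with some locations taking precedence over others. The sketch below, a minimal example with the AWS SDK for Python (Boto3), contrasts relying on that default resolution order with explicitly selecting a named profile; the profile name is a placeholder.

```python
import boto3

# Default resolution order (roughly): explicit parameters, environment variables,
# shared credentials/config files, then container or instance role credentials.
default_creds = boto3.Session().get_credentials()
if default_creds is not None:
    print("Default chain source:", default_creds.method)

# Explicitly select a named profile from the shared config files;
# "example-profile" is a placeholder for a profile you have configured.
profile_creds = boto3.Session(profile_name="example-profile").get_credentials()
if profile_creds is not None:
    print("Named profile source:", profile_creds.method)
```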
To manage your AWS account using the CLI or SDK you can use AWS CloudShell from your browser. If you use CloudShell to run CLI or SDK commands you must first sign-in to the console. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different"} +{"global_id": 69, "doc_id": "iam", "chunk_id": "17", "question_id": 2, "question": "What is recommended over the long-term access keys associated with IAM users?", "answer_span": "IAM Identity Center access keys are temporary credentials that can be automatically refreshed and are recommended over the long-term access keys associated with IAM users.", "chunk": "are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line Interface (CLI) and Software Development Kits (SDKs) IAM Identity Center and IAM users use different methods to authenticate their credentials when they authenticate through the CLI or the application interfaces (APIs) in the associated SDKs. Credentials and configuration settings are located in multiple places, such as the system or user environment variables, local AWS configuration files, or explicitly declared on the command line as a parameter. Certain locations take precedence over others. AWS Command Line Tools 11 AWS Identity and Access Management User Guide Both IAM Identity Center and IAM provide access keys that can be used with the CLI or SDK. IAM Identity Center access keys are temporary credentials that can be automatically refreshed and are recommended over the long-term access keys associated with IAM users. To manage your AWS account using the CLI or SDK you can use AWS CloudShell from your browser. If you use CloudShell to run CLI or SDK commands you must first sign-in to the console. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different"} +{"global_id": 70, "doc_id": "iam", "chunk_id": "17", "question_id": 3, "question": "What must you do first to use CloudShell to run CLI or SDK commands?", "answer_span": "If you use CloudShell to run CLI or SDK commands you must first sign-in to the console.", "chunk": "are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. 
For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line Interface (CLI) and Software Development Kits (SDKs) IAM Identity Center and IAM users use different methods to authenticate their credentials when they authenticate through the CLI or the application interfaces (APIs) in the associated SDKs. Credentials and configuration settings are located in multiple places, such as the system or user environment variables, local AWS configuration files, or explicitly declared on the command line as a parameter. Certain locations take precedence over others. AWS Command Line Tools 11 AWS Identity and Access Management User Guide Both IAM Identity Center and IAM provide access keys that can be used with the CLI or SDK. IAM Identity Center access keys are temporary credentials that can be automatically refreshed and are recommended over the long-term access keys associated with IAM users. To manage your AWS account using the CLI or SDK you can use AWS CloudShell from your browser. If you use CloudShell to run CLI or SDK commands you must first sign-in to the console. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different"} +{"global_id": 71, "doc_id": "iam", "chunk_id": "17", "question_id": 4, "question": "Where are credentials and configuration settings located?", "answer_span": "Credentials and configuration settings are located in multiple places, such as the system or user environment variables, local AWS configuration files, or explicitly declared on the command line as a parameter.", "chunk": "are based on the credentials you used to sign-in to the console. Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For more information, see Use AWS CloudShell to work with AWS Identity and Access Management AWS Command Line Interface (CLI) and Software Development Kits (SDKs) IAM Identity Center and IAM users use different methods to authenticate their credentials when they authenticate through the CLI or the application interfaces (APIs) in the associated SDKs. Credentials and configuration settings are located in multiple places, such as the system or user environment variables, local AWS configuration files, or explicitly declared on the command line as a parameter. Certain locations take precedence over others. AWS Command Line Tools 11 AWS Identity and Access Management User Guide Both IAM Identity Center and IAM provide access keys that can be used with the CLI or SDK. IAM Identity Center access keys are temporary credentials that can be automatically refreshed and are recommended over the long-term access keys associated with IAM users. To manage your AWS account using the CLI or SDK you can use AWS CloudShell from your browser. If you use CloudShell to run CLI or SDK commands you must first sign-in to the console. The permissions for accessing AWS resources are based on the credentials you used to sign-in to the console. 
Depending on your experience, you may find the CLI to be a more efficient method of managing your AWS account. For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different"} +{"global_id": 72, "doc_id": "iam", "chunk_id": "18", "question_id": 1, "question": "What can you download for application development?", "answer_span": "you can download the CLI or SDK to your computer", "chunk": "For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different ways, depending on the environment and the access available to you. • Recommended options for authenticating local code with AWS service are IAM Identity Center and IAM Roles Anywhere • Recommended options for authenticating code running within an AWS environment are to use IAM roles or use IAM Identity Center credentials. When signing in using the AWS access portal, you can get short-term credentials from the start page where you choose your permission set. These credentials have a defined duration and don't automatically refresh. If you want to use these credentials, after signing in to the AWS portal, choose the AWS account and then choose the permissions set. Select Command line or programmatic access to view the options you can use to access AWS resources programmatically or from the CLI. For more information about these methods, see Getting and refreshing temporary credentials in the IAM Identity Center User Guide. These credentials are often used during application development to quickly test code. We recommend using IAM Identity Center credentials that automatically refresh when automating access to your AWS resources. If you have configured users and permission sets in IAM Identity Center you use the aws configure sso command to use a command-line wizard that will help you identify the credentials available to you and store them in a profile. For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. AWS Command Line Tools"} +{"global_id": 73, "doc_id": "iam", "chunk_id": "18", "question_id": 2, "question": "What are the recommended options for authenticating local code with AWS service?", "answer_span": "Recommended options for authenticating local code with AWS service are IAM Identity Center and IAM Roles Anywhere", "chunk": "For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different ways, depending on the environment and the access available to you. • Recommended options for authenticating local code with AWS service are IAM Identity Center and IAM Roles Anywhere • Recommended options for authenticating code running within an AWS environment are to use IAM roles or use IAM Identity Center credentials. 
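For the second recommended option above, code running inside an AWS environment, the usual pattern is to attach an IAM role to the compute resource and let the SDK pick up the role's temporary credentials automatically, so no keys appear in the code. A minimal sketch with the AWS SDK for Python (Boto3), assuming it runs on a resource that has a role with sufficient permissions attached:

```python
import boto3

# No access keys appear in the code: when this runs on an EC2 instance, in a
# container, or in a Lambda function with an IAM role attached, the default
# credential chain picks up the role's temporary credentials and refreshes them.
iam = boto3.client("iam")
print(iam.list_account_aliases()["AccountAliases"])
```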
When signing in using the AWS access portal, you can get short-term credentials from the start page where you choose your permission set. These credentials have a defined duration and don't automatically refresh. If you want to use these credentials, after signing in to the AWS portal, choose the AWS account and then choose the permissions set. Select Command line or programmatic access to view the options you can use to access AWS resources programmatically or from the CLI. For more information about these methods, see Getting and refreshing temporary credentials in the IAM Identity Center User Guide. These credentials are often used during application development to quickly test code. We recommend using IAM Identity Center credentials that automatically refresh when automating access to your AWS resources. If you have configured users and permission sets in IAM Identity Center you use the aws configure sso command to use a command-line wizard that will help you identify the credentials available to you and store them in a profile. For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. AWS Command Line Tools"} +{"global_id": 74, "doc_id": "iam", "chunk_id": "18", "question_id": 3, "question": "How can you get short-term credentials when signing in using the AWS access portal?", "answer_span": "you can get short-term credentials from the start page where you choose your permission set", "chunk": "For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different ways, depending on the environment and the access available to you. • Recommended options for authenticating local code with AWS service are IAM Identity Center and IAM Roles Anywhere • Recommended options for authenticating code running within an AWS environment are to use IAM roles or use IAM Identity Center credentials. When signing in using the AWS access portal, you can get short-term credentials from the start page where you choose your permission set. These credentials have a defined duration and don't automatically refresh. If you want to use these credentials, after signing in to the AWS portal, choose the AWS account and then choose the permissions set. Select Command line or programmatic access to view the options you can use to access AWS resources programmatically or from the CLI. For more information about these methods, see Getting and refreshing temporary credentials in the IAM Identity Center User Guide. These credentials are often used during application development to quickly test code. We recommend using IAM Identity Center credentials that automatically refresh when automating access to your AWS resources. If you have configured users and permission sets in IAM Identity Center you use the aws configure sso command to use a command-line wizard that will help you identify the credentials available to you and store them in a profile. For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. 
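Once the aws configure sso wizard has stored IAM Identity Center settings in a named profile, the SDKs can use that profile as well as the CLI. A minimal sketch with the AWS SDK for Python (Boto3); the profile name is a placeholder for whatever name you gave the wizard, and it assumes you have an active IAM Identity Center session.

```python
import boto3

# "my-sso-profile" is a placeholder for the profile created by `aws configure sso`.
# The SDK uses the cached IAM Identity Center token to obtain and refresh the
# short-term credentials for the selected permission set.
session = boto3.Session(profile_name="my-sso-profile")
sts = session.client("sts")

print(sts.get_caller_identity()["Arn"])
```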
AWS Command Line Tools"} +{"global_id": 75, "doc_id": "iam", "chunk_id": "18", "question_id": 4, "question": "What command do you use to identify the credentials available to you in IAM Identity Center?", "answer_span": "you use the aws configure sso command to use a command-line wizard that will help you identify the credentials available to you", "chunk": "For application development, you can download the CLI or SDK to your computer and sign-in from the command prompt or a Docker window. In this scenario, you configure authentication and access credentials as part of the CLI script or SDK application. You can configure programmatic access to resources in different ways, depending on the environment and the access available to you. • Recommended options for authenticating local code with AWS service are IAM Identity Center and IAM Roles Anywhere • Recommended options for authenticating code running within an AWS environment are to use IAM roles or use IAM Identity Center credentials. When signing in using the AWS access portal, you can get short-term credentials from the start page where you choose your permission set. These credentials have a defined duration and don't automatically refresh. If you want to use these credentials, after signing in to the AWS portal, choose the AWS account and then choose the permissions set. Select Command line or programmatic access to view the options you can use to access AWS resources programmatically or from the CLI. For more information about these methods, see Getting and refreshing temporary credentials in the IAM Identity Center User Guide. These credentials are often used during application development to quickly test code. We recommend using IAM Identity Center credentials that automatically refresh when automating access to your AWS resources. If you have configured users and permission sets in IAM Identity Center you use the aws configure sso command to use a command-line wizard that will help you identify the credentials available to you and store them in a profile. For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. AWS Command Line Tools"} +{"global_id": 76, "doc_id": "iam", "chunk_id": "19", "question_id": 1, "question": "What will the command-line wizard help you identify?", "answer_span": "the credentials available to you", "chunk": "command-line wizard that will help you identify the credentials available to you and store them in a profile. For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. AWS Command Line Tools 12 AWS Identity and Access Management User Guide Note Many sample applications use long-term access keys associated with IAM users or root user. You should only use long-term credentials within a sandbox environment as part of a learning exercise. Review the alternatives to long-term access keys and plan to transition your code to use alternative credentials, such as IAM Identity Center credentials or IAM roles, as soon as possible. After transitioning your code, delete the access keys. 
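The note above ends by advising that you delete long-term access keys once code has moved to temporary credentials. A hedged sketch of that cleanup step with the AWS SDK for Python (Boto3); the user name is a placeholder, and the deletion call is left commented out so you can first confirm from the last-used date that a key is genuinely unused.

```python
import boto3

iam = boto3.client("iam")
user_name = "example-user"  # placeholder IAM user

# Inspect the user's long-term access keys before removing them.
for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
    last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
    print(key["AccessKeyId"], key["Status"],
          last_used["AccessKeyLastUsed"].get("LastUsedDate"))
    # Uncomment once you are sure the key is no longer needed:
    # iam.delete_access_key(UserName=user_name, AccessKeyId=key["AccessKeyId"])
```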
To learn more about configuring the CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide for Version 2 and Authentication and access credentials in the AWS Command Line Interface User Guide To learn more about configuring the SDK, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide and IAM Roles Anywhere in the AWS SDKs and Tools Reference Guide. Use the AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the"} +{"global_id": 77, "doc_id": "iam", "chunk_id": "19", "question_id": 2, "question": "Where can you find more information about configuring your profile?", "answer_span": "see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2", "chunk": "command-line wizard that will help you identify the credentials available to you and store them in a profile. For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. AWS Command Line Tools 12 AWS Identity and Access Management User Guide Note Many sample applications use long-term access keys associated with IAM users or root user. You should only use long-term credentials within a sandbox environment as part of a learning exercise. Review the alternatives to long-term access keys and plan to transition your code to use alternative credentials, such as IAM Identity Center credentials or IAM roles, as soon as possible. After transitioning your code, delete the access keys. To learn more about configuring the CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide for Version 2 and Authentication and access credentials in the AWS Command Line Interface User Guide To learn more about configuring the SDK, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide and IAM Roles Anywhere in the AWS SDKs and Tools Reference Guide. Use the AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the"} +{"global_id": 78, "doc_id": "iam", "chunk_id": "19", "question_id": 3, "question": "What should you only use long-term credentials within?", "answer_span": "a sandbox environment as part of a learning exercise", "chunk": "command-line wizard that will help you identify the credentials available to you and store them in a profile. 
For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. AWS Command Line Tools 12 AWS Identity and Access Management User Guide Note Many sample applications use long-term access keys associated with IAM users or root user. You should only use long-term credentials within a sandbox environment as part of a learning exercise. Review the alternatives to long-term access keys and plan to transition your code to use alternative credentials, such as IAM Identity Center credentials or IAM roles, as soon as possible. After transitioning your code, delete the access keys. To learn more about configuring the CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide for Version 2 and Authentication and access credentials in the AWS Command Line Interface User Guide To learn more about configuring the SDK, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide and IAM Roles Anywhere in the AWS SDKs and Tools Reference Guide. Use the AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the"} +{"global_id": 79, "doc_id": "iam", "chunk_id": "19", "question_id": 4, "question": "What do the SDKs provide a convenient way to create?", "answer_span": "programmatic access to IAM and AWS", "chunk": "command-line wizard that will help you identify the credentials available to you and store them in a profile. For more information about configuring your profile, see Configure your profile with the aws configure sso wizard in the AWS Command Line Interface User Guide for Version 2. AWS Command Line Tools 12 AWS Identity and Access Management User Guide Note Many sample applications use long-term access keys associated with IAM users or root user. You should only use long-term credentials within a sandbox environment as part of a learning exercise. Review the alternatives to long-term access keys and plan to transition your code to use alternative credentials, such as IAM Identity Center credentials or IAM roles, as soon as possible. After transitioning your code, delete the access keys. To learn more about configuring the CLI, see Install or update the latest version of the AWS CLI in the AWS Command Line Interface User Guide for Version 2 and Authentication and access credentials in the AWS Command Line Interface User Guide To learn more about configuring the SDK, see IAM Identity Center authentication in the AWS SDKs and Tools Reference Guide and IAM Roles Anywhere in the AWS SDKs and Tools Reference Guide. Use the AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to IAM and AWS. 
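As a small illustration of that convenience, the sketch below creates an IAM client with the AWS SDK for Python (Boto3): request signing happens transparently, and the automatic retry behaviour can be tuned through the client configuration. The retry values shown are illustrative, not recommendations.

```python
import boto3
from botocore.config import Config

# The SDK signs each request and retries transient failures automatically;
# the retry behaviour is configurable. These values are only illustrative.
iam = boto3.client(
    "iam",
    config=Config(retries={"max_attempts": 5, "mode": "standard"}),
)

summary = iam.get_account_summary()["SummaryMap"]
print("IAM users in this account:", summary.get("Users"))
```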
For example, the SDKs take care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the"} +{"global_id": 80, "doc_id": "iam", "chunk_id": "20", "question_id": 1, "question": "What does AWS Identity and Access Management provide?", "answer_span": "AWS Identity and Access Management provides the infrastructure necessary to control authentication and authorization for your AWS account.", "chunk": "care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the IAM Query API, which lets you issue HTTPS requests directly to the service. When you use the Query API, you must include code to digitally sign requests using your credentials. For more information, see Calling the IAM API using HTTP query requests and the IAM API Reference. How IAM works AWS Identity and Access Management provides the infrastructure necessary to control authentication and authorization for your AWS account. Use the AWS SDKs 13 AWS Identity and Access Management User Guide First, a human user or an application uses their sign-in credentials to authenticate with AWS. IAM matches the sign-in credentials to a principal (an IAM user, AWS STS federated user principal, IAM role, or application) trusted by the AWS account and authenticates permission to access AWS. Next, IAM makes a request to grant the principal access to resources. IAM grants or denies access in response to an authorization request. For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, you send an authorization request to IAM for that service. IAM verifies that your identity is on the list of authorized users, determines what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. Once authorized, the principal can perform actions or operations on resources in your AWS account."} +{"global_id": 81, "doc_id": "iam", "chunk_id": "20", "question_id": 2, "question": "How does a human user or application authenticate with AWS?", "answer_span": "First, a human user or an application uses their sign-in credentials to authenticate with AWS.", "chunk": "care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the IAM Query API, which lets you issue HTTPS requests directly to the service. When you use the Query API, you must include code to digitally sign requests using your credentials. For more information, see Calling the IAM API using HTTP query requests and the IAM API Reference. How IAM works AWS Identity and Access Management provides the infrastructure necessary to control authentication and authorization for your AWS account. 
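The Query API passage above notes that, outside the SDKs, you must include code to digitally sign requests with your credentials. The sketch below shows one way to do that from Python, signing an IAM ListUsers query request with Signature Version 4 via botocore's signing helpers; it assumes credentials are available locally, and since IAM is a global service the request is signed for the us-east-1 region.

```python
import urllib.request

import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

# Build an IAM Query API request (ListUsers) and sign it with SigV4.
url = "https://iam.amazonaws.com/?Action=ListUsers&Version=2010-05-08"
request = AWSRequest(method="GET", url=url, headers={"Host": "iam.amazonaws.com"})

credentials = botocore.session.Session().get_credentials()
SigV4Auth(credentials, "iam", "us-east-1").add_auth(request)  # IAM signs as us-east-1

# Send the signed request; the response body is the Query API's XML document.
signed = urllib.request.Request(url, headers=dict(request.headers.items()), method="GET")
with urllib.request.urlopen(signed) as response:
    print(response.read().decode()[:500])
```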
Use the AWS SDKs 13 AWS Identity and Access Management User Guide First, a human user or an application uses their sign-in credentials to authenticate with AWS. IAM matches the sign-in credentials to a principal (an IAM user, AWS STS federated user principal, IAM role, or application) trusted by the AWS account and authenticates permission to access AWS. Next, IAM makes a request to grant the principal access to resources. IAM grants or denies access in response to an authorization request. For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, you send an authorization request to IAM for that service. IAM verifies that your identity is on the list of authorized users, determines what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. Once authorized, the principal can perform actions or operations on resources in your AWS account."} +{"global_id": 82, "doc_id": "iam", "chunk_id": "20", "question_id": 3, "question": "What does IAM do after matching sign-in credentials?", "answer_span": "Next, IAM makes a request to grant the principal access to resources.", "chunk": "care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the IAM Query API, which lets you issue HTTPS requests directly to the service. When you use the Query API, you must include code to digitally sign requests using your credentials. For more information, see Calling the IAM API using HTTP query requests and the IAM API Reference. How IAM works AWS Identity and Access Management provides the infrastructure necessary to control authentication and authorization for your AWS account. Use the AWS SDKs 13 AWS Identity and Access Management User Guide First, a human user or an application uses their sign-in credentials to authenticate with AWS. IAM matches the sign-in credentials to a principal (an IAM user, AWS STS federated user principal, IAM role, or application) trusted by the AWS account and authenticates permission to access AWS. Next, IAM makes a request to grant the principal access to resources. IAM grants or denies access in response to an authorization request. For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, you send an authorization request to IAM for that service. IAM verifies that your identity is on the list of authorized users, determines what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. 
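The paragraph above mentions that principals from another trusted AWS account can make authorization requests. Programmatically, that trust is usually exercised by assuming a role in the other account and using the temporary credentials it returns. A minimal sketch with the AWS SDK for Python (Boto3); the role ARN and session name are placeholders.

```python
import boto3

sts = boto3.client("sts")

# Placeholder ARN for a role in another account that trusts this principal.
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/example-cross-account-role",
    RoleSessionName="example-session",
)

creds = assumed["Credentials"]
# Use the temporary credentials to act as the assumed role.
sts_as_role = boto3.client(
    "sts",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(sts_as_role.get_caller_identity()["Arn"])
```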
Once authorized, the principal can perform actions or operations on resources in your AWS account."} +{"global_id": 83, "doc_id": "iam", "chunk_id": "20", "question_id": 4, "question": "What happens when you select a service on the console Home page?", "answer_span": "When you select a service, you send an authorization request to IAM for that service.", "chunk": "care of tasks such as cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see the Tools for Amazon Web Services page. Use the IAM Query API You can access IAM and AWS programmatically by using the IAM Query API, which lets you issue HTTPS requests directly to the service. When you use the Query API, you must include code to digitally sign requests using your credentials. For more information, see Calling the IAM API using HTTP query requests and the IAM API Reference. How IAM works AWS Identity and Access Management provides the infrastructure necessary to control authentication and authorization for your AWS account. Use the AWS SDKs 13 AWS Identity and Access Management User Guide First, a human user or an application uses their sign-in credentials to authenticate with AWS. IAM matches the sign-in credentials to a principal (an IAM user, AWS STS federated user principal, IAM role, or application) trusted by the AWS account and authenticates permission to access AWS. Next, IAM makes a request to grant the principal access to resources. IAM grants or denies access in response to an authorization request. For example, when you first sign in to the console and are on the console Home page, you aren't accessing a specific service. When you select a service, you send an authorization request to IAM for that service. IAM verifies that your identity is on the list of authorized users, determines what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. Once authorized, the principal can perform actions or operations on resources in your AWS account."} +{"global_id": 84, "doc_id": "iam", "chunk_id": "21", "question_id": 1, "question": "What can a principal do once authorized?", "answer_span": "Once authorized, the principal can perform actions or operations on resources in your AWS account.", "chunk": "what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. Once authorized, the principal can perform actions or operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. The following diagram illustrates this process through the IAM infrastructure: How IAM works 14 AWS Identity and Access Management User Guide Components of a request When a principal tries to use the AWS Management Console, the AWS API, or the AWS CLI, that principal sends a request to AWS. The request includes the following information: • Actions or operations – The actions or operations that the principal wants to perform., such as an action in the AWS Management Console, or an operation in the AWS CLI or AWS API. 
• Resources – The AWS resource object upon which the principal requests to perform an action or operation. • Principal – The person or application that used an entity (user or role) to send the request. Information about the principal includes the permission policies. Components of a request 15 AWS Identity and Access Management User Guide • Environment data – Information about the IP address, user agent, SSL enabled status, and the timestamp. • Resource data – Data related to the resource requested, such as a DynamoDB table name or a tag on an Amazon EC2 instance. AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request"} +{"global_id": 85, "doc_id": "iam", "chunk_id": "21", "question_id": 2, "question": "What information does a request include?", "answer_span": "The request includes the following information: • Actions or operations – The actions or operations that the principal wants to perform., such as an action in the AWS Management Console, or an operation in the AWS CLI or AWS API.", "chunk": "what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. Once authorized, the principal can perform actions or operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. The following diagram illustrates this process through the IAM infrastructure: How IAM works 14 AWS Identity and Access Management User Guide Components of a request When a principal tries to use the AWS Management Console, the AWS API, or the AWS CLI, that principal sends a request to AWS. The request includes the following information: • Actions or operations – The actions or operations that the principal wants to perform., such as an action in the AWS Management Console, or an operation in the AWS CLI or AWS API. • Resources – The AWS resource object upon which the principal requests to perform an action or operation. • Principal – The person or application that used an entity (user or role) to send the request. Information about the principal includes the permission policies. Components of a request 15 AWS Identity and Access Management User Guide • Environment data – Information about the IP address, user agent, SSL enabled status, and the timestamp. • Resource data – Data related to the resource requested, such as a DynamoDB table name or a tag on an Amazon EC2 instance. AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request"} +{"global_id": 86, "doc_id": "iam", "chunk_id": "21", "question_id": 3, "question": "What does AWS gather into a request context?", "answer_span": "AWS gathers the request information into a request context, which IAM evaluates to authorize the request.", "chunk": "what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. 
Once authorized, the principal can perform actions or operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. The following diagram illustrates this process through the IAM infrastructure: How IAM works 14 AWS Identity and Access Management User Guide Components of a request When a principal tries to use the AWS Management Console, the AWS API, or the AWS CLI, that principal sends a request to AWS. The request includes the following information: • Actions or operations – The actions or operations that the principal wants to perform., such as an action in the AWS Management Console, or an operation in the AWS CLI or AWS API. • Resources – The AWS resource object upon which the principal requests to perform an action or operation. • Principal – The person or application that used an entity (user or role) to send the request. Information about the principal includes the permission policies. Components of a request 15 AWS Identity and Access Management User Guide • Environment data – Information about the IP address, user agent, SSL enabled status, and the timestamp. • Resource data – Data related to the resource requested, such as a DynamoDB table name or a tag on an Amazon EC2 instance. AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request"} +{"global_id": 87, "doc_id": "iam", "chunk_id": "21", "question_id": 4, "question": "How does a principal sign in to AWS?", "answer_span": "A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request.", "chunk": "what policies control the level of access granted, and evaluates any other policies that might be in effect. Principals within your AWS account or from another AWS account that you trust can make authorization requests. Once authorized, the principal can perform actions or operations on resources in your AWS account. For example, the principal could launch a new Amazon Elastic Compute Cloud instance, modify IAM group membership, or delete Amazon Simple Storage Service buckets. The following diagram illustrates this process through the IAM infrastructure: How IAM works 14 AWS Identity and Access Management User Guide Components of a request When a principal tries to use the AWS Management Console, the AWS API, or the AWS CLI, that principal sends a request to AWS. The request includes the following information: • Actions or operations – The actions or operations that the principal wants to perform., such as an action in the AWS Management Console, or an operation in the AWS CLI or AWS API. • Resources – The AWS resource object upon which the principal requests to perform an action or operation. • Principal – The person or application that used an entity (user or role) to send the request. Information about the principal includes the permission policies. Components of a request 15 AWS Identity and Access Management User Guide • Environment data – Information about the IP address, user agent, SSL enabled status, and the timestamp. • Resource data – Data related to the resource requested, such as a DynamoDB table name or a tag on an Amazon EC2 instance. 
AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request"} +{"global_id": 88, "doc_id": "iam", "chunk_id": "22", "question_id": 1, "question": "What does AWS gather into a request context?", "answer_span": "AWS gathers the request information into a request context", "chunk": "table name or a tag on an Amazon EC2 instance. AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request to AWS. Some services, such as Amazon S3 and AWS STS, allow specific requests from anonymous users. However, they're the exception to the rule. Each type of user goes through authentication. • Root user – Your sign in credentials used for authentication are the email address you used to create the AWS account and the password you specified at that time. • Federated principal – Your identity provider authenticates you and passes your credentials to AWS, you don't have to sign-in directly to AWS. Both IAM Identity Center and IAM support identity federation. • Users in AWS IAM Identity Center directory(not federated)– Users created directly in the IAM Identity Center default directory sign in using the AWS access portal and provide your username and password. • IAM user – You sign-in by providing your account ID or alias, your username, and password. To authenticate workloads from the API or AWS CLI, you might use temporary credentials through assuming a role or you might use long-term credentials by providing your access key and secret key. To learn more about the IAM entities, see IAM users and IAM roles. AWS recommends that you use multi-factor authentication (MFA) with all users to increase the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How"} +{"global_id": 89, "doc_id": "iam", "chunk_id": "22", "question_id": 2, "question": "What are the sign-in credentials for a root user?", "answer_span": "Your sign in credentials used for authentication are the email address you used to create the AWS account and the password you specified at that time", "chunk": "table name or a tag on an Amazon EC2 instance. AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request to AWS. Some services, such as Amazon S3 and AWS STS, allow specific requests from anonymous users. However, they're the exception to the rule. Each type of user goes through authentication. • Root user – Your sign in credentials used for authentication are the email address you used to create the AWS account and the password you specified at that time. • Federated principal – Your identity provider authenticates you and passes your credentials to AWS, you don't have to sign-in directly to AWS. Both IAM Identity Center and IAM support identity federation. 
• Users in AWS IAM Identity Center directory(not federated)– Users created directly in the IAM Identity Center default directory sign in using the AWS access portal and provide your username and password. • IAM user – You sign-in by providing your account ID or alias, your username, and password. To authenticate workloads from the API or AWS CLI, you might use temporary credentials through assuming a role or you might use long-term credentials by providing your access key and secret key. To learn more about the IAM entities, see IAM users and IAM roles. AWS recommends that you use multi-factor authentication (MFA) with all users to increase the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How"} +{"global_id": 90, "doc_id": "iam", "chunk_id": "22", "question_id": 3, "question": "How do users in AWS IAM Identity Center directory sign in?", "answer_span": "Users created directly in the IAM Identity Center default directory sign in using the AWS access portal and provide your username and password", "chunk": "table name or a tag on an Amazon EC2 instance. AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request to AWS. Some services, such as Amazon S3 and AWS STS, allow specific requests from anonymous users. However, they're the exception to the rule. Each type of user goes through authentication. • Root user – Your sign in credentials used for authentication are the email address you used to create the AWS account and the password you specified at that time. • Federated principal – Your identity provider authenticates you and passes your credentials to AWS, you don't have to sign-in directly to AWS. Both IAM Identity Center and IAM support identity federation. • Users in AWS IAM Identity Center directory(not federated)– Users created directly in the IAM Identity Center default directory sign in using the AWS access portal and provide your username and password. • IAM user – You sign-in by providing your account ID or alias, your username, and password. To authenticate workloads from the API or AWS CLI, you might use temporary credentials through assuming a role or you might use long-term credentials by providing your access key and secret key. To learn more about the IAM entities, see IAM users and IAM roles. AWS recommends that you use multi-factor authentication (MFA) with all users to increase the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How"} +{"global_id": 91, "doc_id": "iam", "chunk_id": "22", "question_id": 4, "question": "What does authorization refer to in the context of IAM?", "answer_span": "Authorization refers to the principal having the required permissions to complete their request", "chunk": "table name or a tag on an Amazon EC2 instance. 
AWS gathers the request information into a request context, which IAM evaluates to authorize the request. How principals are authenticated A principal signs in to AWS using their credentials which IAM authenticates to permit the principal to send a request to AWS. Some services, such as Amazon S3 and AWS STS, allow specific requests from anonymous users. However, they're the exception to the rule. Each type of user goes through authentication. • Root user – Your sign in credentials used for authentication are the email address you used to create the AWS account and the password you specified at that time. • Federated principal – Your identity provider authenticates you and passes your credentials to AWS, you don't have to sign-in directly to AWS. Both IAM Identity Center and IAM support identity federation. • Users in AWS IAM Identity Center directory(not federated)– Users created directly in the IAM Identity Center default directory sign in using the AWS access portal and provide your username and password. • IAM user – You sign-in by providing your account ID or alias, your username, and password. To authenticate workloads from the API or AWS CLI, you might use temporary credentials through assuming a role or you might use long-term credentials by providing your access key and secret key. To learn more about the IAM entities, see IAM users and IAM roles. AWS recommends that you use multi-factor authentication (MFA) with all users to increase the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How"} +{"global_id": 92, "doc_id": "iam", "chunk_id": "23", "question_id": 1, "question": "What does authorization refer to?", "answer_span": "Authorization refers to the principal having the required permissions to complete their request.", "chunk": "the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How principals are authenticated 16 AWS Identity and Access Management User Guide request context. It then uses the policies to determine whether to allow or deny the request. IAM stores most permission policies as JSON documents that specify the permissions for principal entities. There are several types of policies that can affect an authorization request. To provide your users with permissions to access the AWS resources in your account, you can use identity-based policies. Resource-based policies can grant cross-account access. To make a request in a different account, a policy in the other account must allow you to access the resource and the IAM entity that you use to make the request must have an identity-based policy that allows the request. IAM checks each policy that applies to the context of your request. IAM policy evaluation uses an explicit deny, which means that if a single permissions policy includes a denied action, IAM denies the entire request and stops evaluating. Because requests are denied by default, the applicable permissions policies must allow every part of your request for IAM to authorize your request. 
The evaluation logic for a request within a single account follows these basic rules: • By default, all requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. • The existence of an AWS Organizations service control policy (SCP) or resource control"} +{"global_id": 93, "doc_id": "iam", "chunk_id": "23", "question_id": 2, "question": "What does IAM use to determine whether to allow or deny a request?", "answer_span": "It then uses the policies to determine whether to allow or deny the request.", "chunk": "the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How principals are authenticated 16 AWS Identity and Access Management User Guide request context. It then uses the policies to determine whether to allow or deny the request. IAM stores most permission policies as JSON documents that specify the permissions for principal entities. There are several types of policies that can affect an authorization request. To provide your users with permissions to access the AWS resources in your account, you can use identity-based policies. Resource-based policies can grant cross-account access. To make a request in a different account, a policy in the other account must allow you to access the resource and the IAM entity that you use to make the request must have an identity-based policy that allows the request. IAM checks each policy that applies to the context of your request. IAM policy evaluation uses an explicit deny, which means that if a single permissions policy includes a denied action, IAM denies the entire request and stops evaluating. Because requests are denied by default, the applicable permissions policies must allow every part of your request for IAM to authorize your request. The evaluation logic for a request within a single account follows these basic rules: • By default, all requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. • The existence of an AWS Organizations service control policy (SCP) or resource control"} +{"global_id": 94, "doc_id": "iam", "chunk_id": "23", "question_id": 3, "question": "What must a policy in another account allow for cross-account access?", "answer_span": "a policy in the other account must allow you to access the resource", "chunk": "the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How principals are authenticated 16 AWS Identity and Access Management User Guide request context. It then uses the policies to determine whether to allow or deny the request. IAM stores most permission policies as JSON documents that specify the permissions for principal entities. 
There are several types of policies that can affect an authorization request. To provide your users with permissions to access the AWS resources in your account, you can use identity-based policies. Resource-based policies can grant cross-account access. To make a request in a different account, a policy in the other account must allow you to access the resource and the IAM entity that you use to make the request must have an identity-based policy that allows the request. IAM checks each policy that applies to the context of your request. IAM policy evaluation uses an explicit deny, which means that if a single permissions policy includes a denied action, IAM denies the entire request and stops evaluating. Because requests are denied by default, the applicable permissions policies must allow every part of your request for IAM to authorize your request. The evaluation logic for a request within a single account follows these basic rules: • By default, all requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. • The existence of an AWS Organizations service control policy (SCP) or resource control"} +{"global_id": 95, "doc_id": "iam", "chunk_id": "23", "question_id": 4, "question": "What happens if a single permissions policy includes a denied action?", "answer_span": "IAM denies the entire request and stops evaluating.", "chunk": "the security of your account. To learn more about MFA, see AWS Multi-factor authentication in IAM. Authorization and permission policy basics Authorization refers to the principal having the required permissions to complete their request. During authorization, IAM identifies the policies that apply to the request using values from the How principals are authenticated 16 AWS Identity and Access Management User Guide request context. It then uses the policies to determine whether to allow or deny the request. IAM stores most permission policies as JSON documents that specify the permissions for principal entities. There are several types of policies that can affect an authorization request. To provide your users with permissions to access the AWS resources in your account, you can use identity-based policies. Resource-based policies can grant cross-account access. To make a request in a different account, a policy in the other account must allow you to access the resource and the IAM entity that you use to make the request must have an identity-based policy that allows the request. IAM checks each policy that applies to the context of your request. IAM policy evaluation uses an explicit deny, which means that if a single permissions policy includes a denied action, IAM denies the entire request and stops evaluating. Because requests are denied by default, the applicable permissions policies must allow every part of your request for IAM to authorize your request. The evaluation logic for a request within a single account follows these basic rules: • By default, all requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. 
• The existence of an AWS Organizations service control policy (SCP) or resource control"} +{"global_id": 96, "doc_id": "iam", "chunk_id": "24", "question_id": 1, "question": "What happens if there is an explicit allow in any permissions policy?", "answer_span": "An explicit allow in any permissions policy (identity-based or resource-based) overrides this default.", "chunk": "requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. • The existence of an AWS Organizations service control policy (SCP) or resource control policy (RCP), IAM permissions boundary, or a session policy overrides the allow. If one or more of these policy types exists, they must all allow the request. Otherwise, it's implicitly denied. For more information on SCPs and RCPs, see Authorization policies in AWS Organizations in the AWS Organizations User Guide. • An explicit deny in any policy overrides any allows in any policy. To learn more, see Policy evaluation logic. After IAM authenticates and authorizes the principal, IAM approves the actions or operations in their request by evaluating the permission policy that applies to the principal. Each AWS service defines the actions (operations) they support, and include things that you can do to a resource, such as viewing, creating, editing, and deleting that resource. The permission policy that applies to the principal must include the necessary actions to perform an operation. To learn more about how IAM evaluates permission policies, see the section called “Policy evaluation logic”. The service defines a set of actions that a principal can perform on each resource. When creating permission policies, make sure to include the actions that you want the user to be able to perform. Authorization and permission policy basics 17"} +{"global_id": 97, "doc_id": "iam", "chunk_id": "24", "question_id": 2, "question": "What must happen if one or more of the policy types exists?", "answer_span": "If one or more of these policy types exists, they must all allow the request.", "chunk": "requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. • The existence of an AWS Organizations service control policy (SCP) or resource control policy (RCP), IAM permissions boundary, or a session policy overrides the allow. If one or more of these policy types exists, they must all allow the request. Otherwise, it's implicitly denied. For more information on SCPs and RCPs, see Authorization policies in AWS Organizations in the AWS Organizations User Guide. • An explicit deny in any policy overrides any allows in any policy. To learn more, see Policy evaluation logic. After IAM authenticates and authorizes the principal, IAM approves the actions or operations in their request by evaluating the permission policy that applies to the principal. Each AWS service defines the actions (operations) they support, and include things that you can do to a resource, such as viewing, creating, editing, and deleting that resource. The permission policy that applies to the principal must include the necessary actions to perform an operation. 
To learn more about how IAM evaluates permission policies, see the section called “Policy evaluation logic”. The service defines a set of actions that a principal can perform on each resource. When creating permission policies, make sure to include the actions that you want the user to be able to perform. Authorization and permission policy basics 17"} +{"global_id": 98, "doc_id": "iam", "chunk_id": "24", "question_id": 3, "question": "What overrides any allows in any policy?", "answer_span": "An explicit deny in any policy overrides any allows in any policy.", "chunk": "requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. • The existence of an AWS Organizations service control policy (SCP) or resource control policy (RCP), IAM permissions boundary, or a session policy overrides the allow. If one or more of these policy types exists, they must all allow the request. Otherwise, it's implicitly denied. For more information on SCPs and RCPs, see Authorization policies in AWS Organizations in the AWS Organizations User Guide. • An explicit deny in any policy overrides any allows in any policy. To learn more, see Policy evaluation logic. After IAM authenticates and authorizes the principal, IAM approves the actions or operations in their request by evaluating the permission policy that applies to the principal. Each AWS service defines the actions (operations) they support, and include things that you can do to a resource, such as viewing, creating, editing, and deleting that resource. The permission policy that applies to the principal must include the necessary actions to perform an operation. To learn more about how IAM evaluates permission policies, see the section called “Policy evaluation logic”. The service defines a set of actions that a principal can perform on each resource. When creating permission policies, make sure to include the actions that you want the user to be able to perform. Authorization and permission policy basics 17"} +{"global_id": 99, "doc_id": "iam", "chunk_id": "24", "question_id": 4, "question": "What must the permission policy that applies to the principal include?", "answer_span": "The permission policy that applies to the principal must include the necessary actions to perform an operation.", "chunk": "requests are denied. (In general, requests made using the AWS account root user credentials for resources in the account are always allowed.) • An explicit allow in any permissions policy (identity-based or resource-based) overrides this default. • The existence of an AWS Organizations service control policy (SCP) or resource control policy (RCP), IAM permissions boundary, or a session policy overrides the allow. If one or more of these policy types exists, they must all allow the request. Otherwise, it's implicitly denied. For more information on SCPs and RCPs, see Authorization policies in AWS Organizations in the AWS Organizations User Guide. • An explicit deny in any policy overrides any allows in any policy. To learn more, see Policy evaluation logic. After IAM authenticates and authorizes the principal, IAM approves the actions or operations in their request by evaluating the permission policy that applies to the principal. 
Each AWS service defines the actions (operations) they support, and include things that you can do to a resource, such as viewing, creating, editing, and deleting that resource. The permission policy that applies to the principal must include the necessary actions to perform an operation. To learn more about how IAM evaluates permission policies, see the section called “Policy evaluation logic”. The service defines a set of actions that a principal can perform on each resource. When creating permission policies, make sure to include the actions that you want the user to be able to perform. Authorization and permission policy basics 17"} +{"global_id": 100, "doc_id": "athena", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Athena?", "answer_span": "Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL.", "chunk": "Amazon Athena User Guide What is Amazon Athena? Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. For more information, see Get started. Amazon Athena also makes it easy to interactively run data analytics using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly. Use the simplified notebook experience in Amazon Athena console to develop Apache Spark applications using Python or Use Athena notebook APIs. For more information, see Get started with Apache Spark on Amazon Athena. Athena SQL and Apache Spark on Amazon Athena are serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically—running queries in parallel—so results are fast, even with large datasets and complex queries. Topics • When should I use Athena? • Client and programming tools for using Athena • Set up, administrative, and programmatic access • AWS service integrations with Athena When should I use Athena? Query services like Amazon Athena, data warehouses like Amazon Redshift, and sophisticated data processing frameworks like Amazon EMR all address different needs and use cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use"} +{"global_id": 101, "doc_id": "athena", "chunk_id": "0", "question_id": 2, "question": "How does Amazon Athena help with data analytics?", "answer_span": "Amazon Athena also makes it easy to interactively run data analytics using Apache Spark without having to plan for, configure, or manage resources.", "chunk": "Amazon Athena User Guide What is Amazon Athena? Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. 
For more information, see Get started. Amazon Athena also makes it easy to interactively run data analytics using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly. Use the simplified notebook experience in Amazon Athena console to develop Apache Spark applications using Python or Use Athena notebook APIs. For more information, see Get started with Apache Spark on Amazon Athena. Athena SQL and Apache Spark on Amazon Athena are serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically—running queries in parallel—so results are fast, even with large datasets and complex queries. Topics • When should I use Athena? • Client and programming tools for using Athena • Set up, administrative, and programmatic access • AWS service integrations with Athena When should I use Athena? Query services like Amazon Athena, data warehouses like Amazon Redshift, and sophisticated data processing frameworks like Amazon EMR all address different needs and use cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use"} +{"global_id": 102, "doc_id": "athena", "chunk_id": "0", "question_id": 3, "question": "What types of data can Amazon Athena analyze?", "answer_span": "Amazon Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3.", "chunk": "Amazon Athena User Guide What is Amazon Athena? Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. For more information, see Get started. Amazon Athena also makes it easy to interactively run data analytics using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly. Use the simplified notebook experience in Amazon Athena console to develop Apache Spark applications using Python or Use Athena notebook APIs. For more information, see Get started with Apache Spark on Amazon Athena. Athena SQL and Apache Spark on Amazon Athena are serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically—running queries in parallel—so results are fast, even with large datasets and complex queries. Topics • When should I use Athena? • Client and programming tools for using Athena • Set up, administrative, and programmatic access • AWS service integrations with Athena When should I use Athena? Query services like Amazon Athena, data warehouses like Amazon Redshift, and sophisticated data processing frameworks like Amazon EMR all address different needs and use cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. 
Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use"} +{"global_id": 103, "doc_id": "athena", "chunk_id": "0", "question_id": 4, "question": "What formats of data can be analyzed with Amazon Athena?", "answer_span": "Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC.", "chunk": "Amazon Athena User Guide What is Amazon Athena? Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds. For more information, see Get started. Amazon Athena also makes it easy to interactively run data analytics using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly. Use the simplified notebook experience in Amazon Athena console to develop Apache Spark applications using Python or Use Athena notebook APIs. For more information, see Get started with Apache Spark on Amazon Athena. Athena SQL and Apache Spark on Amazon Athena are serverless, so there is no infrastructure to set up or manage, and you pay only for the queries you run. Athena scales automatically—running queries in parallel—so results are fast, even with large datasets and complex queries. Topics • When should I use Athena? • Client and programming tools for using Athena • Set up, administrative, and programmatic access • AWS service integrations with Athena When should I use Athena? Query services like Amazon Athena, data warehouses like Amazon Redshift, and sophisticated data processing frameworks like Amazon EMR all address different needs and use cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use"} +{"global_id": 104, "doc_id": "athena", "chunk_id": "1", "question_id": 1, "question": "What types of data can Amazon Athena analyze?", "answer_span": "Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3.", "chunk": "cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use Athena? 1 Amazon Athena User Guide You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Athena integrates with Amazon QuickSight for easy data visualization. You can use Athena to generate reports or to explore data with business intelligence tools or SQL clients connected with a JDBC or an ODBC driver. For more information, see What is Amazon QuickSight in the Amazon QuickSight User Guide and Connect to Amazon Athena with ODBC and JDBC drivers. Athena integrates with the AWS Glue Data Catalog, which offers a persistent metadata store for your data in Amazon S3. 
This allows you to create tables and query data in Athena based on a central metadata store available throughout your Amazon Web Services account and integrated with the ETL and data discovery features of AWS Glue. For more information, see Use AWS Glue Data Catalog to connect to your data and What is AWS Glue in the AWS Glue Developer Guide. Amazon Athena makes it easy to run interactive queries against data directly in Amazon S3 without having to format data or manage infrastructure. For example, Athena is useful if you want to run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. You should use Amazon Athena if you want to run interactive ad hoc SQL"} +{"global_id": 105, "doc_id": "athena", "chunk_id": "1", "question_id": 2, "question": "What is a benefit of using Amazon Athena?", "answer_span": "Amazon Athena makes it easy to run interactive queries against data directly in Amazon S3 without having to format data or manage infrastructure.", "chunk": "cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use Athena? 1 Amazon Athena User Guide You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Athena integrates with Amazon QuickSight for easy data visualization. You can use Athena to generate reports or to explore data with business intelligence tools or SQL clients connected with a JDBC or an ODBC driver. For more information, see What is Amazon QuickSight in the Amazon QuickSight User Guide and Connect to Amazon Athena with ODBC and JDBC drivers. Athena integrates with the AWS Glue Data Catalog, which offers a persistent metadata store for your data in Amazon S3. This allows you to create tables and query data in Athena based on a central metadata store available throughout your Amazon Web Services account and integrated with the ETL and data discovery features of AWS Glue. For more information, see Use AWS Glue Data Catalog to connect to your data and What is AWS Glue in the AWS Glue Developer Guide. Amazon Athena makes it easy to run interactive queries against data directly in Amazon S3 without having to format data or manage infrastructure. For example, Athena is useful if you want to run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. You should use Amazon Athena if you want to run interactive ad hoc SQL"} +{"global_id": 106, "doc_id": "athena", "chunk_id": "1", "question_id": 3, "question": "How does Athena integrate with Amazon QuickSight?", "answer_span": "Athena integrates with Amazon QuickSight for easy data visualization.", "chunk": "cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use Athena? 
1 Amazon Athena User Guide You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Athena integrates with Amazon QuickSight for easy data visualization. You can use Athena to generate reports or to explore data with business intelligence tools or SQL clients connected with a JDBC or an ODBC driver. For more information, see What is Amazon QuickSight in the Amazon QuickSight User Guide and Connect to Amazon Athena with ODBC and JDBC drivers. Athena integrates with the AWS Glue Data Catalog, which offers a persistent metadata store for your data in Amazon S3. This allows you to create tables and query data in Athena based on a central metadata store available throughout your Amazon Web Services account and integrated with the ETL and data discovery features of AWS Glue. For more information, see Use AWS Glue Data Catalog to connect to your data and What is AWS Glue in the AWS Glue Developer Guide. Amazon Athena makes it easy to run interactive queries against data directly in Amazon S3 without having to format data or manage infrastructure. For example, Athena is useful if you want to run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. You should use Amazon Athena if you want to run interactive ad hoc SQL"} +{"global_id": 107, "doc_id": "athena", "chunk_id": "1", "question_id": 4, "question": "What is required to run ad-hoc queries using Athena?", "answer_span": "You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena.", "chunk": "cases. The following guidance can help you choose one or more services based on your requirements. Amazon Athena Athena helps you analyze unstructured, semi-structured, and structured data stored in Amazon S3. Examples include CSV, JSON, or columnar data formats such as Apache Parquet and Apache ORC. When should I use Athena? 1 Amazon Athena User Guide You can use Athena to run ad-hoc queries using ANSI SQL, without the need to aggregate or load the data into Athena. Athena integrates with Amazon QuickSight for easy data visualization. You can use Athena to generate reports or to explore data with business intelligence tools or SQL clients connected with a JDBC or an ODBC driver. For more information, see What is Amazon QuickSight in the Amazon QuickSight User Guide and Connect to Amazon Athena with ODBC and JDBC drivers. Athena integrates with the AWS Glue Data Catalog, which offers a persistent metadata store for your data in Amazon S3. This allows you to create tables and query data in Athena based on a central metadata store available throughout your Amazon Web Services account and integrated with the ETL and data discovery features of AWS Glue. For more information, see Use AWS Glue Data Catalog to connect to your data and What is AWS Glue in the AWS Glue Developer Guide. Amazon Athena makes it easy to run interactive queries against data directly in Amazon S3 without having to format data or manage infrastructure. For example, Athena is useful if you want to run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. 
You should use Amazon Athena if you want to run interactive ad hoc SQL"} +{"global_id": 108, "doc_id": "athena", "chunk_id": "2", "question_id": 1, "question": "What can you do with Amazon Athena?", "answer_span": "With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL.", "chunk": "run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. You should use Amazon Athena if you want to run interactive ad hoc SQL queries against data on Amazon S3, without having to manage any infrastructure or clusters. Amazon Athena provides the easiest way to run ad hoc queries for data in Amazon S3 without the need to setup or manage any servers. For a list of AWS services that Athena leverages or integrates with, see the section called “AWS service integrations”. SageMaker Unified Studio Amazon SageMaker Unified Studio makes it simple to work with Amazon Athena and Amazon Redshift to run SQL queries on SageMaker Lakehouse data. With Unified Studio, you can develop SQL queries, work with query results, and collaborate with your team through an integrated notebook environment. You can also use Amazon Q generative SQL to generate SQL code from natural language input. To learn more, see SQL Analytics in the SageMaker Unified Studio user guide. Amazon EMR Amazon EMR makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments. Amazon EMR is SageMaker Unified Studio 2 Amazon Athena User Guide flexible – you can run custom applications and code, and define specific compute, memory, storage, and application parameters to optimize your analytic requirements. In addition to running SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with"} +{"global_id": 109, "doc_id": "athena", "chunk_id": "2", "question_id": 2, "question": "When should you use Amazon Athena?", "answer_span": "You should use Amazon Athena if you want to run interactive ad hoc SQL queries against data on Amazon S3, without having to manage any infrastructure or clusters.", "chunk": "run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. You should use Amazon Athena if you want to run interactive ad hoc SQL queries against data on Amazon S3, without having to manage any infrastructure or clusters. Amazon Athena provides the easiest way to run ad hoc queries for data in Amazon S3 without the need to setup or manage any servers. For a list of AWS services that Athena leverages or integrates with, see the section called “AWS service integrations”. SageMaker Unified Studio Amazon SageMaker Unified Studio makes it simple to work with Amazon Athena and Amazon Redshift to run SQL queries on SageMaker Lakehouse data. With Unified Studio, you can develop SQL queries, work with query results, and collaborate with your team through an integrated notebook environment. You can also use Amazon Q generative SQL to generate SQL code from natural language input. 
To learn more, see SQL Analytics in the SageMaker Unified Studio user guide. Amazon EMR Amazon EMR makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments. Amazon EMR is SageMaker Unified Studio 2 Amazon Athena User Guide flexible – you can run custom applications and code, and define specific compute, memory, storage, and application parameters to optimize your analytic requirements. In addition to running SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with"} +{"global_id": 110, "doc_id": "athena", "chunk_id": "2", "question_id": 3, "question": "What does Amazon EMR make simple and cost effective?", "answer_span": "Amazon EMR makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments.", "chunk": "run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. You should use Amazon Athena if you want to run interactive ad hoc SQL queries against data on Amazon S3, without having to manage any infrastructure or clusters. Amazon Athena provides the easiest way to run ad hoc queries for data in Amazon S3 without the need to setup or manage any servers. For a list of AWS services that Athena leverages or integrates with, see the section called “AWS service integrations”. SageMaker Unified Studio Amazon SageMaker Unified Studio makes it simple to work with Amazon Athena and Amazon Redshift to run SQL queries on SageMaker Lakehouse data. With Unified Studio, you can develop SQL queries, work with query results, and collaborate with your team through an integrated notebook environment. You can also use Amazon Q generative SQL to generate SQL code from natural language input. To learn more, see SQL Analytics in the SageMaker Unified Studio user guide. Amazon EMR Amazon EMR makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments. Amazon EMR is SageMaker Unified Studio 2 Amazon Athena User Guide flexible – you can run custom applications and code, and define specific compute, memory, storage, and application parameters to optimize your analytic requirements. In addition to running SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with"} +{"global_id": 111, "doc_id": "athena", "chunk_id": "2", "question_id": 4, "question": "What can you do with SageMaker Unified Studio?", "answer_span": "With Unified Studio, you can develop SQL queries, work with query results, and collaborate with your team through an integrated notebook environment.", "chunk": "run a quick query on web logs to troubleshoot a performance issue on your site. With Athena, you can get started fast: you just define a table for your data and start querying using standard SQL. 
You should use Amazon Athena if you want to run interactive ad hoc SQL queries against data on Amazon S3, without having to manage any infrastructure or clusters. Amazon Athena provides the easiest way to run ad hoc queries for data in Amazon S3 without the need to setup or manage any servers. For a list of AWS services that Athena leverages or integrates with, see the section called “AWS service integrations”. SageMaker Unified Studio Amazon SageMaker Unified Studio makes it simple to work with Amazon Athena and Amazon Redshift to run SQL queries on SageMaker Lakehouse data. With Unified Studio, you can develop SQL queries, work with query results, and collaborate with your team through an integrated notebook environment. You can also use Amazon Q generative SQL to generate SQL code from natural language input. To learn more, see SQL Analytics in the SageMaker Unified Studio user guide. Amazon EMR Amazon EMR makes it simple and cost effective to run highly distributed processing frameworks such as Hadoop, Spark, and Presto when compared to on-premises deployments. Amazon EMR is SageMaker Unified Studio 2 Amazon Athena User Guide flexible – you can run custom applications and code, and define specific compute, memory, storage, and application parameters to optimize your analytic requirements. In addition to running SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with"} +{"global_id": 112, "doc_id": "athena", "chunk_id": "3", "question_id": 1, "question": "What types of tasks can Amazon EMR run?", "answer_span": "Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code.", "chunk": "SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with the latest big data processing frameworks such as Spark, Hadoop, Presto, or Hbase. Amazon EMR gives you full control over the configuration of your clusters and the software installed on them. You can use Amazon Athena to query data that you process using Amazon EMR. Amazon Athena supports many of the same data formats as Amazon EMR. Athena's data catalog is Hive metastore compatible. If you use EMR and already have a Hive metastore, you can run your DDL statements on Amazon Athena and query your data immediately without affecting your Amazon EMR jobs. Amazon Redshift A data warehouse like Amazon Redshift is your best choice when you need to pull together data from many different sources – like inventory systems, financial systems, and retail sales systems – into a common format, and store it for long periods of time. If you want to build sophisticated business reports from historical data, then a data warehouse like Amazon Redshift is the best choice. The query engine in Amazon Redshift has been optimized to perform especially well on running complex queries that join large numbers of very large database tables. 
When you need to run queries against highly structured data with lots of joins across lots of very large tables, choose Amazon Redshift. For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon"} +{"global_id": 113, "doc_id": "athena", "chunk_id": "3", "question_id": 2, "question": "What big data processing frameworks does Amazon EMR support?", "answer_span": "You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with the latest big data processing frameworks such as Spark, Hadoop, Presto, or Hbase.", "chunk": "SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with the latest big data processing frameworks such as Spark, Hadoop, Presto, or Hbase. Amazon EMR gives you full control over the configuration of your clusters and the software installed on them. You can use Amazon Athena to query data that you process using Amazon EMR. Amazon Athena supports many of the same data formats as Amazon EMR. Athena's data catalog is Hive metastore compatible. If you use EMR and already have a Hive metastore, you can run your DDL statements on Amazon Athena and query your data immediately without affecting your Amazon EMR jobs. Amazon Redshift A data warehouse like Amazon Redshift is your best choice when you need to pull together data from many different sources – like inventory systems, financial systems, and retail sales systems – into a common format, and store it for long periods of time. If you want to build sophisticated business reports from historical data, then a data warehouse like Amazon Redshift is the best choice. The query engine in Amazon Redshift has been optimized to perform especially well on running complex queries that join large numbers of very large database tables. When you need to run queries against highly structured data with lots of joins across lots of very large tables, choose Amazon Redshift. For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon"} +{"global_id": 114, "doc_id": "athena", "chunk_id": "3", "question_id": 3, "question": "What is the best choice for pulling together data from many different sources?", "answer_span": "A data warehouse like Amazon Redshift is your best choice when you need to pull together data from many different sources – like inventory systems, financial systems, and retail sales systems – into a common format, and store it for long periods of time.", "chunk": "SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with the latest big data processing frameworks such as Spark, Hadoop, Presto, or Hbase. Amazon EMR gives you full control over the configuration of your clusters and the software installed on them. 
You can use Amazon Athena to query data that you process using Amazon EMR. Amazon Athena supports many of the same data formats as Amazon EMR. Athena's data catalog is Hive metastore compatible. If you use EMR and already have a Hive metastore, you can run your DDL statements on Amazon Athena and query your data immediately without affecting your Amazon EMR jobs. Amazon Redshift A data warehouse like Amazon Redshift is your best choice when you need to pull together data from many different sources – like inventory systems, financial systems, and retail sales systems – into a common format, and store it for long periods of time. If you want to build sophisticated business reports from historical data, then a data warehouse like Amazon Redshift is the best choice. The query engine in Amazon Redshift has been optimized to perform especially well on running complex queries that join large numbers of very large database tables. When you need to run queries against highly structured data with lots of joins across lots of very large tables, choose Amazon Redshift. For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon"} +{"global_id": 115, "doc_id": "athena", "chunk_id": "3", "question_id": 4, "question": "What is optimized for running complex queries that join large numbers of very large database tables?", "answer_span": "The query engine in Amazon Redshift has been optimized to perform especially well on running complex queries that join large numbers of very large database tables.", "chunk": "SQL queries, Amazon EMR can run a wide variety of scale-out data processing tasks for applications such as machine learning, graph analytics, data transformation, streaming data, and virtually anything you can code. You should use Amazon EMR if you use custom code to process and analyze extremely large datasets with the latest big data processing frameworks such as Spark, Hadoop, Presto, or Hbase. Amazon EMR gives you full control over the configuration of your clusters and the software installed on them. You can use Amazon Athena to query data that you process using Amazon EMR. Amazon Athena supports many of the same data formats as Amazon EMR. Athena's data catalog is Hive metastore compatible. If you use EMR and already have a Hive metastore, you can run your DDL statements on Amazon Athena and query your data immediately without affecting your Amazon EMR jobs. Amazon Redshift A data warehouse like Amazon Redshift is your best choice when you need to pull together data from many different sources – like inventory systems, financial systems, and retail sales systems – into a common format, and store it for long periods of time. If you want to build sophisticated business reports from historical data, then a data warehouse like Amazon Redshift is the best choice. The query engine in Amazon Redshift has been optimized to perform especially well on running complex queries that join large numbers of very large database tables. When you need to run queries against highly structured data with lots of joins across lots of very large tables, choose Amazon Redshift. 
For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon"} +{"global_id": 116, "doc_id": "athena", "chunk_id": "4", "question_id": 1, "question": "What should you choose for joins across lots of very large tables?", "answer_span": "choose Amazon Redshift.", "chunk": "of joins across lots of very large tables, choose Amazon Redshift. For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon Athena FAQs • Amazon Athena overview • Amazon Athena features • Amazon Athena FAQs • Amazon Athena blog posts Amazon Redshift 3 Amazon Athena User Guide Client and programming tools for using Athena You can access Athena using a variety of client and programming tools. These tools include the AWS Management Console, a JDBC or ODBC connection, the Athena API, the Athena CLI, the AWS SDK, or AWS Tools for Windows PowerShell. • To get started using Athena SQL with the console, see Get started. • To get started creating Jupyter compatible notebooks and Apache Spark applications that use Python, see Use Apache Spark in Amazon Athena. • To learn how to use JDBC or ODBC drivers, see Connect to Amazon Athena with JDBC and Connect to Amazon Athena with ODBC. • To use the Athena API, see the Amazon Athena API Reference. • To use the CLI, install the AWS CLI and then type aws athena help from the command line to see available commands. For information about available commands, see the Amazon Athena command line reference. • To use the AWS SDK for Java 2.x, see the Athena section of the AWS SDK for Java 2.x API Reference, the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide."} +{"global_id": 117, "doc_id": "athena", "chunk_id": "4", "question_id": 2, "question": "Where can you find the decision guide for analytics services on AWS?", "answer_span": "in the Getting Started Resource Center", "chunk": "of joins across lots of very large tables, choose Amazon Redshift. For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon Athena FAQs • Amazon Athena overview • Amazon Athena features • Amazon Athena FAQs • Amazon Athena blog posts Amazon Redshift 3 Amazon Athena User Guide Client and programming tools for using Athena You can access Athena using a variety of client and programming tools. These tools include the AWS Management Console, a JDBC or ODBC connection, the Athena API, the Athena CLI, the AWS SDK, or AWS Tools for Windows PowerShell. • To get started using Athena SQL with the console, see Get started. • To get started creating Jupyter compatible notebooks and Apache Spark applications that use Python, see Use Apache Spark in Amazon Athena. • To learn how to use JDBC or ODBC drivers, see Connect to Amazon Athena with JDBC and Connect to Amazon Athena with ODBC. • To use the Athena API, see the Amazon Athena API Reference. 
• To use the CLI, install the AWS CLI and then type aws athena help from the command line to see available commands. For information about available commands, see the Amazon Athena command line reference. • To use the AWS SDK for Java 2.x, see the Athena section of the AWS SDK for Java 2.x API Reference, the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide."} +{"global_id": 118, "doc_id": "athena", "chunk_id": "4", "question_id": 3, "question": "What tools can you use to access Athena?", "answer_span": "These tools include the AWS Management Console, a JDBC or ODBC connection, the Athena API, the Athena CLI, the AWS SDK, or AWS Tools for Windows PowerShell.", "chunk": "of joins across lots of very large tables, choose Amazon Redshift. For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon Athena FAQs • Amazon Athena overview • Amazon Athena features • Amazon Athena FAQs • Amazon Athena blog posts Amazon Redshift 3 Amazon Athena User Guide Client and programming tools for using Athena You can access Athena using a variety of client and programming tools. These tools include the AWS Management Console, a JDBC or ODBC connection, the Athena API, the Athena CLI, the AWS SDK, or AWS Tools for Windows PowerShell. • To get started using Athena SQL with the console, see Get started. • To get started creating Jupyter compatible notebooks and Apache Spark applications that use Python, see Use Apache Spark in Amazon Athena. • To learn how to use JDBC or ODBC drivers, see Connect to Amazon Athena with JDBC and Connect to Amazon Athena with ODBC. • To use the Athena API, see the Amazon Athena API Reference. • To use the CLI, install the AWS CLI and then type aws athena help from the command line to see available commands. For information about available commands, see the Amazon Athena command line reference. • To use the AWS SDK for Java 2.x, see the Athena section of the AWS SDK for Java 2.x API Reference, the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide."} +{"global_id": 119, "doc_id": "athena", "chunk_id": "4", "question_id": 4, "question": "What should you see to get started using Athena SQL with the console?", "answer_span": "see Get started.", "chunk": "of joins across lots of very large tables, choose Amazon Redshift. For more information about when to use Athena, consult the following resources: • Decision guide for analytics services on AWS in the Getting Started Resource Center • When to use Athena vs other big data services in the Amazon Athena FAQs • Amazon Athena overview • Amazon Athena features • Amazon Athena FAQs • Amazon Athena blog posts Amazon Redshift 3 Amazon Athena User Guide Client and programming tools for using Athena You can access Athena using a variety of client and programming tools. These tools include the AWS Management Console, a JDBC or ODBC connection, the Athena API, the Athena CLI, the AWS SDK, or AWS Tools for Windows PowerShell. 
• To get started using Athena SQL with the console, see Get started. • To get started creating Jupyter compatible notebooks and Apache Spark applications that use Python, see Use Apache Spark in Amazon Athena. • To learn how to use JDBC or ODBC drivers, see Connect to Amazon Athena with JDBC and Connect to Amazon Athena with ODBC. • To use the Athena API, see the Amazon Athena API Reference. • To use the CLI, install the AWS CLI and then type aws athena help from the command line to see available commands. For information about available commands, see the Amazon Athena command line reference. • To use the AWS SDK for Java 2.x, see the Athena section of the AWS SDK for Java 2.x API Reference, the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide."} +{"global_id": 120, "doc_id": "athena", "chunk_id": "5", "question_id": 1, "question": "Where can you find the Athena Java V2 examples?", "answer_span": "the Athena Java V2 examples on GitHub.com", "chunk": "the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide. • To use AWS Tools for Windows PowerShell, see the AWS Tools for PowerShell - Amazon Athena cmdlet reference, the AWS Tools for PowerShell portal page, and the AWS Tools for PowerShell User Guide. • For information about Athena service endpoints that you can connect to programmatically, see Amazon Athena endpoints and quotas in the Amazon Web Services General Reference. Set up, administrative, and programmatic access If you've already signed up for Amazon Web Services, you can start using Amazon Athena immediately. If you haven't signed up for AWS or need assistance getting started, be sure to complete the following tasks. Ways to use Athena 4"} +{"global_id": 121, "doc_id": "athena", "chunk_id": "5", "question_id": 2, "question": "What is the reference for the AWS SDK for .NET?", "answer_span": "the Amazon.Athena namespace in the AWS SDK for .NET API Reference", "chunk": "the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide. • To use AWS Tools for Windows PowerShell, see the AWS Tools for PowerShell - Amazon Athena cmdlet reference, the AWS Tools for PowerShell portal page, and the AWS Tools for PowerShell User Guide. • For information about Athena service endpoints that you can connect to programmatically, see Amazon Athena endpoints and quotas in the Amazon Web Services General Reference. Set up, administrative, and programmatic access If you've already signed up for Amazon Web Services, you can start using Amazon Athena immediately. If you haven't signed up for AWS or need assistance getting started, be sure to complete the following tasks. 
Ways to use Athena 4"} +{"global_id": 122, "doc_id": "athena", "chunk_id": "5", "question_id": 3, "question": "What should you see for information about Athena service endpoints?", "answer_span": "Amazon Athena endpoints and quotas in the Amazon Web Services General Reference", "chunk": "the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide. • To use AWS Tools for Windows PowerShell, see the AWS Tools for PowerShell - Amazon Athena cmdlet reference, the AWS Tools for PowerShell portal page, and the AWS Tools for PowerShell User Guide. • For information about Athena service endpoints that you can connect to programmatically, see Amazon Athena endpoints and quotas in the Amazon Web Services General Reference. Set up, administrative, and programmatic access If you've already signed up for Amazon Web Services, you can start using Amazon Athena immediately. If you haven't signed up for AWS or need assistance getting started, be sure to complete the following tasks. Ways to use Athena 4"} +{"global_id": 123, "doc_id": "athena", "chunk_id": "5", "question_id": 4, "question": "What can you do if you've already signed up for Amazon Web Services?", "answer_span": "you can start using Amazon Athena immediately", "chunk": "the Athena Java V2 examples on GitHub.com, and the AWS SDK for Java 2.x Developer Guide. • To use the AWS SDK for .NET, see the Amazon.Athena namespace in the AWS SDK for .NET API Reference, the .NET Athena examples on GitHub.com, and the AWS SDK for .NET Developer Guide. • To use AWS Tools for Windows PowerShell, see the AWS Tools for PowerShell - Amazon Athena cmdlet reference, the AWS Tools for PowerShell portal page, and the AWS Tools for PowerShell User Guide. • For information about Athena service endpoints that you can connect to programmatically, see Amazon Athena endpoints and quotas in the Amazon Web Services General Reference. Set up, administrative, and programmatic access If you've already signed up for Amazon Web Services, you can start using Amazon Athena immediately. If you haven't signed up for AWS or need assistance getting started, be sure to complete the following tasks. Ways to use Athena 4"} +{"global_id": 124, "doc_id": "wavelength", "chunk_id": "0", "question_id": 1, "question": "What is AWS Wavelength?", "answer_span": "AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. 
Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} +{"global_id": 125, "doc_id": "wavelength", "chunk_id": "0", "question_id": 2, "question": "What is a Wavelength Zone?", "answer_span": "Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} +{"global_id": 126, "doc_id": "wavelength", "chunk_id": "0", "question_id": 3, "question": "What does a carrier gateway do?", "answer_span": "A carrier gateway serves two purposes. 
It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} +{"global_id": 127, "doc_id": "wavelength", "chunk_id": "0", "question_id": 4, "question": "What is a VPC?", "answer_span": "VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones.", "chunk": "AWS Wavelength Developer Guide What is AWS Wavelength? AWS Wavelength enables developers to build applications that require edge computing infrastructure to deliver low latency to mobile devices and end users or increase the resiliency of their existing edge applications. Wavelength deploys standard AWS compute and storage services to the edge of communications service providers' (CSP) networks. You can extend a virtual private cloud (VPC) to one or more Wavelength Zones. You can then use AWS resources such as Amazon Elastic Compute Cloud (Amazon EC2) instances to run the applications that require low latency or edge resiliency within the Wavelength Zone, while seamlessly communicating back to your existing AWS services deployed in the parent AWS Region. For more information, see AWS Wavelength. Wavelength concepts The following are the key concepts: • Wavelength — A new type of AWS infrastructure designed to run workloads that require low latency or edge resiliency. • Wavelength Zone — A zone in the carrier location where the Wavelength infrastructure is deployed. Wavelength Zones are associated with an AWS Region. 
A Wavelength Zone is a logical extension of the Region, and is managed by the control plane in the Region. • VPC — A customer virtual private cloud (VPC) that spans Availability Zones, Local Zones, and Wavelength Zones, and has deployed resources such as Amazon EC2 instances in the subnets that are associated with the zones. • Wavelength subnet — A subnet that you create in a Wavelength Zone. You can create one or more subnets, and then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group"} +{"global_id": 128, "doc_id": "wavelength", "chunk_id": "1", "question_id": 1, "question": "What does a carrier gateway allow?", "answer_span": "It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 129, "doc_id": "wavelength", "chunk_id": "1", "question_id": 2, "question": "What can you create in Wavelength Zones?", "answer_span": "You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. 
• Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 129, "doc_id": "wavelength", "chunk_id": "1", "question_id": 2, "question": "What can you create in Wavelength Zones?", "answer_span": "You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 130, "doc_id": "wavelength", "chunk_id": "1", "question_id": 3, "question": "What interface provides a web interface to access Wavelength resources?", "answer_span": "AWS Management Console— Provides a web interface that you can use to access your Wavelength resources.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface.
• AWS SDKs"} +{"global_id": 131, "doc_id": "wavelength", "chunk_id": "1", "question_id": 4, "question": "Which operating systems support the AWS Command Line Interface?", "answer_span": "is supported on Windows, macOS, and Linux.", "chunk": "then run and manage AWS services, such as Amazon EC2 instances, in the subnet. • Carrier gateway — A carrier gateway serves two purposes. It allows inbound traffic from a carrier network in a specific location, and allows outbound traffic to the carrier network and internet. • Network Border Group — A unique set of Availability Zones, Local Zones, or Wavelength Zones from which AWS advertises IP addresses. • Wavelength application — An application that you run on an AWS resource in a Wavelength Zone. Wavelength concepts 1 AWS Wavelength Developer Guide AWS resources on Wavelength You can create Amazon EC2 instances, Amazon EBS volumes, and Amazon VPC subnets and carrier gateways in Wavelength Zones. You can also use the following: • Amazon EC2 Auto Scaling • Amazon EKS clusters • Amazon ECS clusters • Amazon EC2 Systems Manager • Amazon CloudWatch • AWS CloudTrail • AWS CloudFormation • Application Load Balancer in select Wavelength Zones. For a list of these Zones, see Load balancing. The services in Wavelength are part of a VPC that is connected over a reliable connection to an AWS Region for easy access to services running in Regional subnets. Working with Wavelength You can create, access, and manage your EC2 resources, Wavelength Zones, and carrier gateways using any of the following interfaces: • AWS Management Console— Provides a web interface that you can use to access your Wavelength resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs"} +{"global_id": 132, "doc_id": "wavelength", "chunk_id": "2", "question_id": 1, "question": "What operating systems are supported by the services including Amazon VPC?", "answer_span": "is supported on Windows, macOS, and Linux.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. 
Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} +{"global_id": 133, "doc_id": "wavelength", "chunk_id": "2", "question_id": 2, "question": "What namespace does Amazon EC2 use?", "answer_span": "for example Amazon EC2 uses the \"ec2\" namespace,", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing. Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. 
Pricing 3 AWS Wavelength Developer Guide Using"} +{"global_id": 135, "doc_id": "wavelength", "chunk_id": "2", "question_id": 4, "question": "How does Wavelength benefit media and entertainment?", "answer_span": "Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams.", "chunk": "services, including Amazon VPC, and is supported on Windows, macOS, and Linux. The services you use in Wavelength continue to use their own namespace, for example Amazon EC2 uses the \"ec2\" namespace, and Amazon EBS uses the \"ebs\" namespace. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and handling errors. For more information, see AWS SDKs. When you use any of the interfaces for your Wavelength Zones, use the parent Region. AWS resources on Wavelength 2 AWS Wavelength Developer Guide Pricing For more information, see AWS Wavelength Pricing.
Use cases for AWS Wavelength Using AWS Wavelength Zones can help you accomplish a variety of goals. This section lists a few to give you an idea of the possibilities. Contents • Online betting and regulated industries • Media and entertainment • Healthcare • Augmented reality (AR) and virtual reality (VR) • Connected vehicles • Smart factories • Real-time gaming Online betting and regulated industries AWS Wavelength provides edge resiliency to help address data residency requirements for regulated industries, such as online sports betting. Using a combination of AWS Wavelength alongside existing AWS hybrid and edge services such as AWS Outposts or AWS Local Zones, you can create highly-available architectures within state or country borders. Media and entertainment Wavelength provides the low latency needed to live stream high-resolution video and high-fidelity audio, and to embed interactive experiences into live video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using"} +{"global_id": 136, "doc_id": "wavelength", "chunk_id": "3", "question_id": 1, "question": "What do real-time video analytics provide?", "answer_span": "Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. 
With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 137, "doc_id": "wavelength", "chunk_id": "3", "question_id": 2, "question": "What can medical training providers offer using AWS Wavelength?", "answer_span": "medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 138, "doc_id": "wavelength", "chunk_id": "3", "question_id": 3, "question": "What does Cellular Vehicle-to-Everything (C-V2X) enable?", "answer_span": "Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. 
Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 139, "doc_id": "wavelength", "chunk_id": "3", "question_id": 4, "question": "What is essential for real-time game streaming?", "answer_span": "Real-time game streaming depends on low latency to preserve the user experience.", "chunk": "video streams. Real-time video analytics provide the ability to generate real-time statistics that enhance the live event experience. Healthcare Using AWS Wavelength, medical training providers can offer mobile games, medical simulations for rare disease diagnosis, advanced endoscopic maneuvers, ultrasound equipment and much more. Pricing 3 AWS Wavelength Developer Guide Using AWS Wavelength to host the remote rendering engine, doctors can experience an immersive training experience without procuring the often-required expensive equipment to do so. Augmented reality (AR) and virtual reality (VR) By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the benchmark that is needed to offer a realistic customer experience. When you use AWS Wavelength, you can offer AR/VR in locations where it is not possible to run local system servers. Connected vehicles Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling functionality such as intelligent driving, real-time HD maps, and increased road safety. Low latency access to the compute infrastructure that's needed to run data processing and analytics on AWS Wavelength enables real-time monitoring of data from sensors on the vehicle. This allows for secure connectivity, in-car telematics, and autonomous driving. 
Smart factories Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and to trigger actions that address the issues. With AWS Wavelength, these applications can be deployed without having to use expensive, GPUbased servers on the factory floor. Real-time gaming Real-time game streaming depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength"} +{"global_id": 140, "doc_id": "wavelength", "chunk_id": "4", "question_id": 1, "question": "What does AWS Wavelength depend on to preserve the user experience?", "answer_span": "depends on low latency to preserve the user experience.", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide Architect apps for Wavelength Wavelength Zones are designed for the following workloads: • Applications that require edge resiliency across existing AWS hybrid and edge infrastructure deployments • Applications that need to connect to compute with low latency • Applications that need to run in a certain geography due to legal or regulatory requirements • Applications that need consistent data rates from mobile devices to compute in a Wavelength Zone Review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the"} +{"global_id": 141, "doc_id": "wavelength", "chunk_id": "4", "question_id": 2, "question": "What can you stream from Wavelength Zones?", "answer_span": "you can stream the most demanding games from Wavelength Zones", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. 
For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide Architect apps for Wavelength Wavelength Zones are designed for the following workloads: • Applications that require edge resiliency across existing AWS hybrid and edge infrastructure deployments • Applications that need to connect to compute with low latency • Applications that need to run in a certain geography due to legal or regulatory requirements • Applications that need consistent data rates from mobile devices to compute in a Wavelength Zone Review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the"} +{"global_id": 142, "doc_id": "wavelength", "chunk_id": "4", "question_id": 3, "question": "What are Wavelength Zones designed for?", "answer_span": "Wavelength Zones are designed for the following workloads: • Applications that require edge resiliency across existing AWS hybrid and edge infrastructure deployments", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide Architect apps for Wavelength Wavelength Zones are designed for the following workloads: • Applications that require edge resiliency across existing AWS hybrid and edge infrastructure deployments • Applications that need to connect to compute with low latency • Applications that need to run in a certain geography due to legal or regulatory requirements • Applications that need consistent data rates from mobile devices to compute in a Wavelength Zone Review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. 
Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the"} +{"global_id": 143, "doc_id": "wavelength", "chunk_id": "4", "question_id": 4, "question": "What does AWS recommend for architecting edge applications?", "answer_span": "AWS recommends that you architect the edge applications in a hub and spoke model with the Region", "chunk": "depends on low latency to preserve the user experience. With AWS Wavelength, you can stream the most demanding games from Wavelength Zones so that they are available on end devices that have limited processing power. Augmented reality (AR) and virtual reality (VR) 4 AWS Wavelength Developer Guide How AWS Wavelength works The following diagram demonstrates how you can create a subnet that uses resources in a communications service provider (CSP) network at a specific location. For resources that must be deployed to the Wavelength Zone, first opt in to the Wavelength Zone, and then create resources in the Wavelength Zone. Contents • VPCs • Subnets • Carrier gateways • Carrier IP address • Routing • DNS • Maximum transmission unit VPCs After you create a VPC in a Region, create a subnet in a Wavelength Zone that is associated with the VPC. In addition to the Wavelength Zone, you can create resources in all of the Availability Zones and Local Zones that are associated with the VPC. VPCs 5 AWS Wavelength Developer Guide Architect apps for Wavelength Wavelength Zones are designed for the following workloads: • Applications that require edge resiliency across existing AWS hybrid and edge infrastructure deployments • Applications that need to connect to compute with low latency • Applications that need to run in a certain geography due to legal or regulatory requirements • Applications that need consistent data rates from mobile devices to compute in a Wavelength Zone Review Quotas and considerations, which includes information about available Wavelength Zones, service differences, and Service Quotas. Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the"} +{"global_id": 144, "doc_id": "wavelength", "chunk_id": "5", "question_id": 1, "question": "What model does AWS recommend for architecting edge applications?", "answer_span": "AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components.", "chunk": "available Wavelength Zones, service differences, and Service Quotas. Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the section called “Workload placement” • Services that run in Wavelength Zones have different compliance than services in an AWS Region. For more information, see the section called “Compliance validation”. Most Wavelength Zones have network access that is specific to a telecommunication carrier and location. 
Therefore, you might need to have multiple Wavelength Zones for your latency-sensitive applications to meet your latency requirements. For more information, see the section called “Networking considerations”. Discover the closest Wavelength Zone endpoint You can use the following procedures to have client devices discover the closest Wavelength Zone endpoint, for example an Amazon EC2 instance: • Register the instance with a discovery service such as AWS Cloud Map. For information about how to register an instance, see Registering Instances in the AWS Cloud Map Developer Guide. • Another approach is to use multiple Wavelength Zones across your deployment and utilize adjacent Zones, powered by carrier-developed edge discovery services to route mobile traffic. Discover the closest Wavelength Zone endpoint 27 AWS Wavelength Developer Guide For more information, see Deploying dynamic 5G Edge Discovery architectures with AWS Wavelength. • Applications that run on client devices can run latency tests such as ping from the client to select the best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within"} +{"global_id": 145, "doc_id": "wavelength", "chunk_id": "5", "question_id": 2, "question": "What is a requirement for services that run in Wavelength Zones?", "answer_span": "Services that run in Wavelength Zones have different compliance than services in an AWS Region.", "chunk": "available Wavelength Zones, service differences, and Service Quotas. Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the section called “Workload placement” • Services that run in Wavelength Zones have different compliance than services in an AWS Region. For more information, see the section called “Compliance validation”. Most Wavelength Zones have network access that is specific to a telecommunication carrier and location. Therefore, you might need to have multiple Wavelength Zones for your latency-sensitive applications to meet your latency requirements. For more information, see the section called “Networking considerations”. Discover the closest Wavelength Zone endpoint You can use the following procedures to have client devices discover the closest Wavelength Zone endpoint, for example an Amazon EC2 instance: • Register the instance with a discovery service such as AWS Cloud Map. For information about how to register an instance, see Registering Instances in the AWS Cloud Map Developer Guide. • Another approach is to use multiple Wavelength Zones across your deployment and utilize adjacent Zones, powered by carrier-developed edge discovery services to route mobile traffic. Discover the closest Wavelength Zone endpoint 27 AWS Wavelength Developer Guide For more information, see Deploying dynamic 5G Edge Discovery architectures with AWS Wavelength. • Applications that run on client devices can run latency tests such as ping from the client to select the best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. 
Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within"} +{"global_id": 146, "doc_id": "wavelength", "chunk_id": "5", "question_id": 3, "question": "What can be used to discover the closest Wavelength Zone endpoint?", "answer_span": "You can use the following procedures to have client devices discover the closest Wavelength Zone endpoint, for example an Amazon EC2 instance: • Register the instance with a discovery service such as AWS Cloud Map.", "chunk": "available Wavelength Zones, service differences, and Service Quotas. Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the section called “Workload placement” • Services that run in Wavelength Zones have different compliance than services in an AWS Region. For more information, see the section called “Compliance validation”. Most Wavelength Zones have network access that is specific to a telecommunication carrier and location. Therefore, you might need to have multiple Wavelength Zones for your latency-sensitive applications to meet your latency requirements. For more information, see the section called “Networking considerations”. Discover the closest Wavelength Zone endpoint You can use the following procedures to have client devices discover the closest Wavelength Zone endpoint, for example an Amazon EC2 instance: • Register the instance with a discovery service such as AWS Cloud Map. For information about how to register an instance, see Registering Instances in the AWS Cloud Map Developer Guide. • Another approach is to use multiple Wavelength Zones across your deployment and utilize adjacent Zones, powered by carrier-developed edge discovery services to route mobile traffic. Discover the closest Wavelength Zone endpoint 27 AWS Wavelength Developer Guide For more information, see Deploying dynamic 5G Edge Discovery architectures with AWS Wavelength. • Applications that run on client devices can run latency tests such as ping from the client to select the best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within"} +{"global_id": 147, "doc_id": "wavelength", "chunk_id": "5", "question_id": 4, "question": "What is supported in select Wavelength Zones for load balancing?", "answer_span": "Application Load Balancer (ALB) is supported in select Wavelength Zones.", "chunk": "available Wavelength Zones, service differences, and Service Quotas. Consider the following factors when using Wavelength Zones: • AWS recommends that you architect the edge applications in a hub and spoke model with the Region to provide the most scalable, resilient, and cost-effective options for components. For more information, see the section called “Workload placement” • Services that run in Wavelength Zones have different compliance than services in an AWS Region. For more information, see the section called “Compliance validation”. 
Most Wavelength Zones have network access that is specific to a telecommunication carrier and location. Therefore, you might need to have multiple Wavelength Zones for your latency-sensitive applications to meet your latency requirements. For more information, see the section called “Networking considerations”. Discover the closest Wavelength Zone endpoint You can use the following procedures to have client devices discover the closest Wavelength Zone endpoint, for example an Amazon EC2 instance: • Register the instance with a discovery service such as AWS Cloud Map. For information about how to register an instance, see Registering Instances in the AWS Cloud Map Developer Guide. • Another approach is to use multiple Wavelength Zones across your deployment and utilize adjacent Zones, powered by carrier-developed edge discovery services to route mobile traffic. Discover the closest Wavelength Zone endpoint 27 AWS Wavelength Developer Guide For more information, see Deploying dynamic 5G Edge Discovery architectures with AWS Wavelength. • Applications that run on client devices can run latency tests such as ping from the client to select the best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within"} +{"global_id": 148, "doc_id": "wavelength", "chunk_id": "6", "question_id": 1, "question": "What is supported in select Wavelength Zones?", "answer_span": "Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones.", "chunk": "best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within the Wavelength Zone. Key considerations include: • Network Load Balancer (NLB) is not supported in Wavelength Zones. To learn more, see Enabling load-balancing of non-HTTP(s) traffic on AWS Wavelength. • Cross-Zone load balancing across multiple Wavelength Zones is not supported. ALB is available in the following Wavelength Zones: • All Wavelength Zones in the us-east-1 Region. • All Wavelength Zones in us-west-2 Region. • All Wavelength Zones in the ap-northeast-1 Region. • All Wavelength Zones in the eu-central-1 Region. High availability Follow these strategies to deploy highly available architectures at the edge. Deployment Consider the following: • Multiple Wavelength Zones within a given VPC: using techniques highlighted in the Discover the closest Wavelength Zone endpoint section, you can steer traffic to the optimal Wavelength Zone based on latency or application health. • Combine Wavelength Zones with other AWS hybrid and edge locations: you can combine AWS Local Zones subnets with AWS Wavelength Zones subnets to create highly-available Load balancing 28 AWS Wavelength Developer Guide deployments within a given geography. For example, you can create an Atlanta AWS Local Zone subnet (us-east-1-atl-2a) alongside an Atlanta Wavelength Zone subnet (us-east-1-wl1atl-wlz-1) within the same VPC. 
DNS resolution One way to create both physical and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components"} +{"global_id": 149, "doc_id": "wavelength", "chunk_id": "6", "question_id": 2, "question": "Which load balancer is not supported in Wavelength Zones?", "answer_span": "Network Load Balancer (NLB) is not supported in Wavelength Zones.", "chunk": "best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within the Wavelength Zone. Key considerations include: • Network Load Balancer (NLB) is not supported in Wavelength Zones. To learn more, see Enabling load-balancing of non-HTTP(s) traffic on AWS Wavelength. • Cross-Zone load balancing across multiple Wavelength Zones is not supported. ALB is available in the following Wavelength Zones: • All Wavelength Zones in the us-east-1 Region. • All Wavelength Zones in us-west-2 Region. • All Wavelength Zones in the ap-northeast-1 Region. • All Wavelength Zones in the eu-central-1 Region. High availability Follow these strategies to deploy highly available architectures at the edge. Deployment Consider the following: • Multiple Wavelength Zones within a given VPC: using techniques highlighted in the Discover the closest Wavelength Zone endpoint section, you can steer traffic to the optimal Wavelength Zone based on latency or application health. • Combine Wavelength Zones with other AWS hybrid and edge locations: you can combine AWS Local Zones subnets with AWS Wavelength Zones subnets to create highly-available Load balancing 28 AWS Wavelength Developer Guide deployments within a given geography. For example, you can create an Atlanta AWS Local Zone subnet (us-east-1-atl-2a) alongside an Atlanta Wavelength Zone subnet (us-east-1-wl1atl-wlz-1) within the same VPC. DNS resolution One way to create both physical and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components"} +{"global_id": 150, "doc_id": "wavelength", "chunk_id": "6", "question_id": 3, "question": "In which regions is ALB available?", "answer_span": "ALB is available in the following Wavelength Zones: • All Wavelength Zones in the us-east-1 Region. • All Wavelength Zones in us-west-2 Region. • All Wavelength Zones in the ap-northeast-1 Region. • All Wavelength Zones in the eu-central-1 Region.", "chunk": "best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within the Wavelength Zone. Key considerations include: • Network Load Balancer (NLB) is not supported in Wavelength Zones. 
To learn more, see Enabling load-balancing of non-HTTP(s) traffic on AWS Wavelength. • Cross-Zone load balancing across multiple Wavelength Zones is not supported. ALB is available in the following Wavelength Zones: • All Wavelength Zones in the us-east-1 Region. • All Wavelength Zones in us-west-2 Region. • All Wavelength Zones in the ap-northeast-1 Region. • All Wavelength Zones in the eu-central-1 Region. High availability Follow these strategies to deploy highly available architectures at the edge. Deployment Consider the following: • Multiple Wavelength Zones within a given VPC: using techniques highlighted in the Discover the closest Wavelength Zone endpoint section, you can steer traffic to the optimal Wavelength Zone based on latency or application health. • Combine Wavelength Zones with other AWS hybrid and edge locations: you can combine AWS Local Zones subnets with AWS Wavelength Zones subnets to create highly-available Load balancing 28 AWS Wavelength Developer Guide deployments within a given geography. For example, you can create an Atlanta AWS Local Zone subnet (us-east-1-atl-2a) alongside an Atlanta Wavelength Zone subnet (us-east-1-wl1atl-wlz-1) within the same VPC. DNS resolution One way to create both physical and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components"} +{"global_id": 151, "doc_id": "wavelength", "chunk_id": "6", "question_id": 4, "question": "What can you combine with AWS Wavelength Zones to create highly-available deployments?", "answer_span": "you can combine AWS Local Zones subnets with AWS Wavelength Zones subnets to create highly-available Load balancing deployments within a given geography.", "chunk": "best endpoint that is registered in AWS Cloud Map, or can use the geolocation data from the mobile device. Load balancing Application Load Balancer (ALB) is supported in select Wavelength Zones. Load balancers distribute your incoming traffic across multiple targets, such as Amazon EC2 instances, containers, and IP addresses, within the Wavelength Zone. Key considerations include: • Network Load Balancer (NLB) is not supported in Wavelength Zones. To learn more, see Enabling load-balancing of non-HTTP(s) traffic on AWS Wavelength. • Cross-Zone load balancing across multiple Wavelength Zones is not supported. ALB is available in the following Wavelength Zones: • All Wavelength Zones in the us-east-1 Region. • All Wavelength Zones in us-west-2 Region. • All Wavelength Zones in the ap-northeast-1 Region. • All Wavelength Zones in the eu-central-1 Region. High availability Follow these strategies to deploy highly available architectures at the edge. Deployment Consider the following: • Multiple Wavelength Zones within a given VPC: using techniques highlighted in the Discover the closest Wavelength Zone endpoint section, you can steer traffic to the optimal Wavelength Zone based on latency or application health. • Combine Wavelength Zones with other AWS hybrid and edge locations: you can combine AWS Local Zones subnets with AWS Wavelength Zones subnets to create highly-available Load balancing 28 AWS Wavelength Developer Guide deployments within a given geography. 
For example, you can create an Atlanta AWS Local Zone subnet (us-east-1-atl-2a) alongside an Atlanta Wavelength Zone subnet (us-east-1-wl1atl-wlz-1) within the same VPC. DNS resolution One way to create both physical and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components"} +{"global_id": 152, "doc_id": "wavelength", "chunk_id": "7", "question_id": 1, "question": "What is the purpose of utilizing the parent Region in high-availability edge deployments?", "answer_span": "to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint.", "chunk": "and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components in the Region: • Components that are less latency sensitive • Components that do not require data residency • Components that need to be shared across Zones • Components that need to persist state, such as databases Run the application components that need low latency and higher bandwidth over mobile networks in Wavelength Zones. For optimal throughput, AWS recommends that you use a public service endpoint when applications in the Wavelength Zone need to connect to AWS services in the parent Region. DNS resolution 29"} +{"global_id": 153, "doc_id": "wavelength", "chunk_id": "7", "question_id": 2, "question": "What components should be run in the Region?", "answer_span": "Run the following components in the Region: • Components that are less latency sensitive • Components that do not require data residency • Components that need to be shared across Zones • Components that need to persist state, such as databases", "chunk": "and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components in the Region: • Components that are less latency sensitive • Components that do not require data residency • Components that need to be shared across Zones • Components that need to persist state, such as databases Run the application components that need low latency and higher bandwidth over mobile networks in Wavelength Zones. For optimal throughput, AWS recommends that you use a public service endpoint when applications in the Wavelength Zone need to connect to AWS services in the parent Region. 
DNS resolution 29"} +{"global_id": 154, "doc_id": "wavelength", "chunk_id": "7", "question_id": 3, "question": "Where should application components that need low latency and higher bandwidth be run?", "answer_span": "Run the application components that need low latency and higher bandwidth over mobile networks in Wavelength Zones.", "chunk": "and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components in the Region: • Components that are less latency sensitive • Components that do not require data residency • Components that need to be shared across Zones • Components that need to persist state, such as databases Run the application components that need low latency and higher bandwidth over mobile networks in Wavelength Zones. For optimal throughput, AWS recommends that you use a public service endpoint when applications in the Wavelength Zone need to connect to AWS services in the parent Region. DNS resolution 29"} +{"global_id": 155, "doc_id": "wavelength", "chunk_id": "7", "question_id": 4, "question": "What does AWS recommend for optimal throughput when connecting to AWS services in the parent Region?", "answer_span": "AWS recommends that you use a public service endpoint when applications in the Wavelength Zone need to connect to AWS services in the parent Region.", "chunk": "and logical redundancy across your high-availability edge deployments is to utilize the parent Region as the failover, using simple Route 53-based failover policies to steer traffic to an available endpoint. For more information, see Configuring DNS failover in the Amazon Route 53 Developer Guide. Workload placement Run the following components in the Region: • Components that are less latency sensitive • Components that do not require data residency • Components that need to be shared across Zones • Components that need to persist state, such as databases Run the application components that need low latency and higher bandwidth over mobile networks in Wavelength Zones. For optimal throughput, AWS recommends that you use a public service endpoint when applications in the Wavelength Zone need to connect to AWS services in the parent Region. DNS resolution 29"} +{"global_id": 156, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 1, "question": "What can you deploy with Elastic Beanstalk?", "answer_span": "With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. 
Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} +{"global_id": 157, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 2, "question": "What programming languages does Elastic Beanstalk support?", "answer_span": "Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. 
Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} +{"global_id": 158, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 3, "question": "What does Elastic Beanstalk provision for your application?", "answer_span": "Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} +{"global_id": 159, "doc_id": "beanstalk", "chunk_id": "0", "question_id": 4, "question": "How can you interact with Elastic Beanstalk?", "answer_span": "You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk.", "chunk": "AWS Elastic Beanstalk Developer Guide What is AWS Elastic Beanstalk? With Elastic Beanstalk you can deploy web applications into the AWS Cloud on a variety of supported platforms. You build and deploy your applications. Elastic Beanstalk provisions Amazon EC2 instances, configures load balancing, sets up health monitoring, and dynamically scales your environment. In addition to web server environments, Elastic Beanstalk also provides worker environments which you can use to process messages from an Amazon SQS queue, useful for asynchronous or longrunning tasks. 
For more information, see Elastic Beanstalk worker environments. 1 AWS Elastic Beanstalk Developer Guide Supported platforms Elastic Beanstalk supports applications developed in Go, Java, .NET, Node.js, PHP, Python, and Ruby. Elastic Beanstalk also supports Docker containers, where you can choose your own programming language and application dependencies. When you deploy your application, Elastic Supported platforms 2 AWS Elastic Beanstalk Developer Guide Beanstalk builds the selected supported platform version and provisions one or more AWS resources, such as Amazon EC2 instances, in your AWS account to run your application. You can interact with Elastic Beanstalk through the Elastic Beanstalk console, the AWS Command Line Interface (AWS CLI), or the EB CLI, a high-level command line tool designed specifically for Elastic Beanstalk. You can perform most deployment tasks, such as changing the size of your fleet of Amazon EC2 instances or monitoring your application, directly from the Elastic Beanstalk web interface (console). To learn more about how to deploy a sample web application using Elastic Beanstalk, see Learn how to get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your"} +{"global_id": 160, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 1, "question": "What is the first step to use Elastic Beanstalk?", "answer_span": "you create an application, then upload your application source bundle to Elastic Beanstalk.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. 
Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} +{"global_id": 161, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 2, "question": "What information is provided about the application?", "answer_span": "Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} +{"global_id": 162, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 3, "question": "Is there an additional charge for Elastic Beanstalk?", "answer_span": "There is no additional charge for Elastic Beanstalk.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. 
Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} +{"global_id": 163, "doc_id": "beanstalk", "chunk_id": "1", "question_id": 4, "question": "What do you typically do before deploying your code to Elastic Beanstalk?", "answer_span": "you will develop your code locally then deploy it to Amazon EC2 server instances.", "chunk": "get started with Elastic Beanstalk. Application deploy workflow To use Elastic Beanstalk, you create an application, then upload your application source bundle to Elastic Beanstalk. Next, you provide information about the application, and Elastic Beanstalk automatically launches an environment and creates and configures the AWS resources needed to run your code. After you create and deploy your application and your environment is launched, you can manage your environment and deploy new application versions. Information about the application— including metrics, events, and environment status—is made available through the Elastic Beanstalk console, APIs, and Command Line Interfaces. The following diagram illustrates Elastic Beanstalk workflow: Pricing There is no additional charge for Elastic Beanstalk. You pay only for the underlying AWS resources that your application consumes. For details about pricing, see the Elastic Beanstalk service detail page. Application deploy workflow 3 AWS Elastic Beanstalk Developer Guide Next steps We recommend the tutorial, Getting started tutorial, to start using Elastic Beanstalk. The tutorial steps you through creating, viewing, and updating a sample Elastic Beanstalk application. Next steps 4 AWS Elastic Beanstalk Developer Guide Learn how to get started with Elastic Beanstalk With Elastic Beanstalk you can deploy, monitor, and scale web applications and services. Typically, you will develop your code locally then deploy it to Amazon EC2 server instances. Theses instances, also called environments, run on platforms that can be upgraded through the AWS console or the command line. To get started, we recommend deploying a pre-built sample application directly from the console. 
Then, you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them"} +{"global_id": 164, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 1, "question": "What section teaches you how to develop locally and deploy from the command line?", "answer_span": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different"} +{"global_id": 165, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 2, "question": "Are there any costs associated with using Elastic Beanstalk?", "answer_span": "There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end.", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. 
After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different"} +{"global_id": 166, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 3, "question": "What will your first Elastic Beanstalk application consist of?", "answer_span": "Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform.", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. 
Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different"} +{"global_id": 167, "doc_id": "beanstalk", "chunk_id": "2", "question_id": 4, "question": "What is an Elastic Beanstalk environment?", "answer_span": "An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance.", "chunk": "you can learn how to develop locally and deploy from the command line in the the section called “QuickStart for PHP”. There is no cost for using Elastic Beanstalk, but standard fees do apply to AWS resources that you create during the course of this tutorial until you delete them at the end. The total charges are typically less than a dollar. For information about how to minimize charges, see AWS free tier. After completing this tutorial, you will understand the basics of creating, configuring, deploying, updating, and monitoring an Elastic Beanstalk application with environments running on Amazon EC2 instances. Estimated duration: 35-45 minutes 5 AWS Elastic Beanstalk Developer Guide What you will build Your first Elastic Beanstalk application will consist of a single Amazon EC2 environment running the PHP sample on a PHP managed platform. Elastic Beanstalk application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments where your application code runs on platforms provided and managed by Elastic Beanstalk, or in custom containers that you provide. Environment An Elastic Beanstalk environment is a collection of AWS resources running together including an Amazon EC2 instance. When you create an environment, Elastic Beanstalk provisions the necessary resources into your AWS account. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and additional Elastic Beanstalk components. Elastic Beanstalk provides manged platforms, or you can provide your own platform in a container. Elastic Beanstalk supports platforms for different programming languages, application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different"} +{"global_id": 168, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 1, "question": "What must you choose when you create an environment?", "answer_span": "you must choose the platform.", "chunk": "application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. 
What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer"} +{"global_id": 169, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 2, "question": "What is required to run your application code?", "answer_span": "an environment is a collection of AWS resources required to run your application code.", "chunk": "application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. 
Verify that Permissions policies include the following, then choose Next: Developer"} +{"global_id": 170, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 3, "question": "What is the name of the application you need to enter for Application name?", "answer_span": "enter getting-started-app.", "chunk": "application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer"} +{"global_id": 171, "doc_id": "beanstalk", "chunk_id": "3", "question_id": 4, "question": "What role allows Elastic Beanstalk to monitor your EC2 instances?", "answer_span": "A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform.", "chunk": "application servers, and Docker containers. When you create an environment, you must choose the platform. You can upgrade the platform, but you cannot change the platform for an environment. Switching platforms If you need to change programming languages, you must create and switch to a new environment on a different platform. Step 1 - Create an application To create your example application, you'll use the Create application console wizard. It creates an Elastic Beanstalk application and launches an environment within it. Reminder: an environment is a collection of AWS resources required to run your application code. What you will build 7 AWS Elastic Beanstalk Developer Guide To create an application 1. Open the Elastic Beanstalk console. 2. Choose Create application. 3. For Application name enter getting-started-app. The console provides a six step process for creating an application and configuring an environment. 
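The environment-configuration step (environment name, PHP platform, single-instance preset) has an API equivalent as well. A hedged boto3 sketch: the instance profile name is assumed to be the console default created in the service-access steps, and the exact PHP solution stack is looked up at run time because stack names change with platform releases:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Platform (solution stack) names change as new versions ship, so pick a current
# PHP stack at run time instead of hard-coding one.
php_stacks = [s for s in eb.list_available_solution_stacks()["SolutionStacks"] if "PHP" in s]

eb.create_environment(
    ApplicationName="getting-started-app",
    EnvironmentName="gs-app-web-env",
    SolutionStackName=php_stacks[0],
    OptionSettings=[
        # Console preset "Single instance" rather than a load-balanced fleet.
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "SingleInstance"},
        # Assumed name of the EC2 instance profile created in the service-access step.
        {"Namespace": "aws:autoscaling:launchconfiguration",
         "OptionName": "IamInstanceProfile", "Value": "aws-elasticbeanstalk-ec2-role"},
    ],
    # With no VersionLabel specified, Elastic Beanstalk deploys its sample application.
)
```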
For this quick start, you'll only need to focus on the first two steps, then you can skip ahead to review and create your application and environment. To configure an environment 1. In Environment information, for Environment name enter: gs-app-web-env. 2. For Platform, choose the PHP platform. 3. For Application code and Presets, accept the defaults (Sample application and Single instance), then choose Next. To configure service access Next, you need two roles. A service role allows Elastic Beanstalk to monitor your EC2 instances and upgrade you environment’s platform. An EC2 instance profile role permits tasks such as writing logs and interacting with other services. To create the Service role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer"} +{"global_id": 172, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 1, "question": "What should you choose for Service role?", "answer_span": "For Service role, choose Create role.", "chunk": "role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When"} +{"global_id": 173, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 2, "question": "What is the Use case for creating an application in Elastic Beanstalk?", "answer_span": "For Use case, choose Elastic Beanstalk – Environment.", "chunk": "role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. 
Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When"} +{"global_id": 174, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 3, "question": "What should you verify that Permissions policies include when creating the EC2 instance profile?", "answer_span": "Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker", "chunk": "role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. 
Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When"} +{"global_id": 175, "doc_id": "beanstalk", "chunk_id": "4", "question_id": 4, "question": "What should you choose on the Review page after configuring your application?", "answer_span": "On the Review page which shows a summary of your choices, choose Submit.", "chunk": "role 1. For Service role, choose Create role. 2. For Trusted entity type, choose AWS service. Step 1 - Create an application 8 AWS Elastic Beanstalk 3. For Use case, choose Elastic Beanstalk – Environment. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: Developer Guide • AWSElasticBeanstalkEnhancedHealth • AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created service role. To create the EC2 instance profile 1. Choose Create role. 2. For Trusted entity type, choose AWS service. 3. For Use case, choose Elastic Beanstalk – Compute. 4. Choose Next. 5. Verify that Permissions policies include the following, then choose Next: • AWSElasticBeanstalkWebTier • AWSElasticBeanstalkWorkerTier • AWSElasticBeanstalkMulticontainerDocker 6. Choose Create role. 7. Return to the Configure service access tab, refresh the list, then select the newly created EC2 instance profile. To finish configuring and creating your application 1. Skip over EC2 key pair. We'll show you other ways to connect to your Amazon EC2 instances through the Console. 2. Choose Skip to Review to move over several optional steps. Optional steps: networking, databases, scaling parameters, advanced configuration for updates, monitoring, and logging. 3. On the Review page which shows a summary of your choices, choose Submit. Step 1 - Create an application 9 AWS Elastic Beanstalk Developer Guide Congratulations! You have created an application and configured an environment! Now you need to wait for the resources to deploy. Step 2 - Deploy your application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When"} +{"global_id": 176, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 1, "question": "What does Elastic Beanstalk do when you create an application?", "answer_span": "Elastic Beanstalk sets up the environments for you.", "chunk": "application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. 
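Because the initial launch can take several minutes, a script needs to wait for the environment to reach the Ready state before using it. A minimal sketch using the boto3 waiter, with the environment name from this tutorial assumed:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Block until the environment reports Status "Ready"; the waiter polls
# DescribeEnvironments on your behalf.
eb.get_waiter("environment_exists").wait(
    EnvironmentNames=["gs-app-web-env"],
    WaiterConfig={"Delay": 15, "MaxAttempts": 60},
)

env = eb.describe_environments(EnvironmentNames=["gs-app-web-env"])["Environments"][0]
print(env["Status"], env["Health"])
```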
When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment."} +{"global_id": 177, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 2, "question": "How long can the initial deploy take?", "answer_span": "The initial deploy can take up to five minutes to create the resources.", "chunk": "application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. 
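The resources listed above (instances, Auto Scaling group, load balancer, and so on) can be inspected programmatically. A short boto3 sketch, assuming the tutorial's environment name:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Lists the concrete resources Elastic Beanstalk provisioned for the environment.
resources = eb.describe_environment_resources(
    EnvironmentName="gs-app-web-env"
)["EnvironmentResources"]

print([i["Id"] for i in resources["Instances"]])
print([g["Name"] for g in resources["AutoScalingGroups"]])
print([lb["Name"] for lb in resources["LoadBalancers"]])
```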
Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment."} +{"global_id": 178, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 3, "question": "What type of instance does Elastic Beanstalk create?", "answer_span": "EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected.", "chunk": "application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment."} +{"global_id": 179, "doc_id": "beanstalk", "chunk_id": "5", "question_id": 4, "question": "What is created to monitor the load on your instances?", "answer_span": "Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed.", "chunk": "application When you create an application, Elastic Beanstalk sets up the environments for you. You just need to sit back and wait. The initial deploy can take up to five minutes to create the resources. Updates will take less time because only changes will be deployed to your stack. When you create the example application, Elastic Beanstalk creates the following resources: • EC2 instance – An Amazon EC2 virtual machine configured to run web apps on the platform you selected. Every platform runs a different set of software, configuration files, and scripts to support a specific language version, framework, web container, or combination thereof. Most platforms use either Apache or nginx as a reverse proxy to forward web traffic to your web app, serve static assets, and generate access and error logs. You can connect to your Amazon EC2 instances to view configuration and logs. 
Step 2 - Deploy your application 10 AWS Elastic Beanstalk Developer Guide • Instance security group – An Amazon EC2 security group will be created to allow incoming requests on port 80, so inbound traffic on a load balancer can reach your web app. • Amazon S3 bucket – A storage location for your source code, logs, and other artifacts. • Amazon CloudWatch alarms – Two CloudWatch alarms are created to monitor the load on your instances and scale them up or down as needed. • AWS CloudFormation stack – Elastic Beanstalk uses AWS CloudFormation to deploy the resources in your environment and make configuration changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment."} +{"global_id": 180, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 1, "question": "What is the format of the domain name that routes to your web app?", "answer_span": "A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com.", "chunk": "changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application,"} +{"global_id": 181, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 2, "question": "What does Elastic Beanstalk do after creating your application?", "answer_span": "Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment.", "chunk": "changes. 
You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application,"} +{"global_id": 182, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 3, "question": "Where can you view the environment and your application?", "answer_span": "You'll start exploring your deployed application environment from the Environment overview page in the console.", "chunk": "changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. 
Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application,"} +{"global_id": 183, "doc_id": "beanstalk", "chunk_id": "6", "question_id": 4, "question": "What type of connection will you have when browsing your application?", "answer_span": "The connection will be HTTP (not HTTPS), so you might see a warning in your browser.", "chunk": "changes. You can view the resource definition template in the AWS CloudFormation console. • Domain name – A domain name that routes to your web app in the form : subdomain.region.elasticbeanstalk.com. Elastic Beanstalk creates your application, launches an environment, makes an application version, then deploys your code into the environment. During the process, the console tracks progress and displays event status in the Events tab. Step 2 - Deploy your application 11 AWS Elastic Beanstalk Developer Guide Your application is ready! After you see your application health change to Ok, you can browse to your web application's website. Step 3 - Explore the Elastic Beanstalk environment You'll start exploring your deployed application environment from the Environment overview page in the console. To view the environment and your application 1. Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region. 2. In the navigation pane, choose Environments, and then choose the name of your environment from the list. 3. Choose Go to environment to browse your application! (You can also choose the URL link listed for Domain to browse your application.) The connection will be HTTP (not HTTPS), so you might see a warning in your browser. Step 3 - Explore the environment 13 AWS Elastic Beanstalk Developer Guide Back in the Elastic Beanstalk console, the upper portion shows the Environment overview with top level information about your environment, including name, domain URL, current health status, running version, and the platform that the application is running on. The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application,"} +{"global_id": 184, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 1, "question": "What are essential for troubleshooting your currently deployed application?", "answer_span": "The running version and platform are essential for troubleshooting your currently deployed application.", "chunk": "The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. 
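The activity shown in the Events tab is available through the API as well. A short boto3 sketch, assuming the tutorial's environment name:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# The same activity the console's Events tab shows, newest first.
events = eb.describe_events(EnvironmentName="gs-app-web-env", MaxRecords=10)["Events"]
for event in events:
    print(event["EventDate"], event["Severity"], event["Message"])
```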
Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide Understanding concepts in Elastic Beanstalk Becoming familiar with the concepts and terms will help you gain an understanding needed for deploying your applications with Elastic Beanstalk. 142 AWS Elastic Beanstalk Developer Guide Application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments, versions, and environment configurations. Within an Elastic Beanstalk application, you manage all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a"} +{"global_id": 185, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 2, "question": "What does the environment's Domain represent?", "answer_span": "The environment's Domain is the URL for your deployed web application.", "chunk": "The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! 
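The option values that the Configuration page groups by category can be dumped programmatically. A minimal sketch, assuming the tutorial's application and environment names:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Current configuration of the running environment, as (namespace, option, value) triples.
settings = eb.describe_configuration_settings(
    ApplicationName="getting-started-app",
    EnvironmentName="gs-app-web-env",
)["ConfigurationSettings"][0]

for option in settings["OptionSettings"]:
    print(option["Namespace"], option["OptionName"], option.get("Value"))
```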
The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide Understanding concepts in Elastic Beanstalk Becoming familiar with the concepts and terms will help you gain an understanding needed for deploying your applications with Elastic Beanstalk. 142 AWS Elastic Beanstalk Developer Guide Application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments, versions, and environment configurations. Within an Elastic Beanstalk application, you manage all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a"} +{"global_id": 186, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 3, "question": "What can you view and edit in the Configuration link?", "answer_span": "You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more!", "chunk": "The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide Understanding concepts in Elastic Beanstalk Becoming familiar with the concepts and terms will help you gain an understanding needed for deploying your applications with Elastic Beanstalk. 142 AWS Elastic Beanstalk Developer Guide Application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments, versions, and environment configurations. Within an Elastic Beanstalk application, you manage all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. 
An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a"} +{"global_id": 187, "doc_id": "beanstalk", "chunk_id": "7", "question_id": 4, "question": "What is an Elastic Beanstalk application?", "answer_span": "An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments, versions, and environment configurations.", "chunk": "The running version and platform are essential for troubleshooting your currently deployed application. After the overview pane, you will see recent environment activity in the Events tab. Step 3 - Explore the environment 14 AWS Elastic Beanstalk Developer Guide While Elastic Beanstalk creates your AWS resources and launches your application, the environment is in a Pending state. Status messages about launch events are continuously added to the list of Events . The environment's Domain is the URL for your deployed web application. In the left navigation pane, Go to environment also takes you to your domain. Similarly, the left navigation pane has links that correspond to the various tabs. Take note of the Configuration link in the left navigation pane. which displays a summary of environment configuration option values, grouped by category. Environment configuration settings Take note of the Configuration link in the left navigation pane. You can view and edit detailed environment settings, such as service roles, networking, database, scaling, managed platform updates, memory, health monitoring, rolling deployment, logging, and more! The various tabs contain detailed information about your environment: Step 3 - Explore the environment 15 AWS Elastic Beanstalk Developer Guide Understanding concepts in Elastic Beanstalk Becoming familiar with the concepts and terms will help you gain an understanding needed for deploying your applications with Elastic Beanstalk. 142 AWS Elastic Beanstalk Developer Guide Application An Elastic Beanstalk application is a container for Elastic Beanstalk components, including environments, versions, and environment configurations. Within an Elastic Beanstalk application, you manage all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a"} +{"global_id": 188, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 1, "question": "What does an application version refer to in Elastic Beanstalk?", "answer_span": "In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application.", "chunk": "all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a Java WAR file. An application version is part of an application. Applications can have many versions and each application version is unique. In a running environment, you can deploy any application version you already uploaded to the application, or you can upload and immediately deploy a new application version. 
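Creating and deploying an application version follows directly from this definition: upload a source bundle to Amazon S3, register it under a unique label, then point the environment at that label. A hedged boto3 sketch; the bucket, key, zip file, and version label are placeholders:

```python
import boto3

eb = boto3.client("elasticbeanstalk")
s3 = boto3.client("s3")

# Placeholder bucket/key; the bundle is a zip of your application source.
bucket, key = "my-eb-source-bucket", "getting-started-app/v2.zip"
s3.upload_file("app-v2.zip", bucket, key)

# Register the bundle as a new, uniquely labeled application version...
eb.create_application_version(
    ApplicationName="getting-started-app",
    VersionLabel="v2",
    SourceBundle={"S3Bucket": bucket, "S3Key": key},
)

# ...and deploy that version to the running environment.
eb.update_environment(EnvironmentName="gs-app-web-env", VersionLabel="v2")
```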
For example, you could upload multiple application versions to test differences between them. Environment An environment is a collection of AWS resources running an application version. Each environment runs only one application version at a time, however, you can run the same application version or different application versions in many environments simultaneously. When you create an environment, Elastic Beanstalk provisions the resources needed in your AWS account to run the application version you specified. Environment tier When you launch an Elastic Beanstalk environment, you first choose an environment tier. The environment tier designates the type of application that the environment runs and determines what resources Elastic Beanstalk provisions to support it. An application that serves HTTP requests runs in a web server environment tier. A backend environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier. Environment configuration An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of"} +{"global_id": 189, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 2, "question": "What is an environment in Elastic Beanstalk?", "answer_span": "An environment is a collection of AWS resources running an application version.", "chunk": "all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a Java WAR file. An application version is part of an application. Applications can have many versions and each application version is unique. In a running environment, you can deploy any application version you already uploaded to the application, or you can upload and immediately deploy a new application version. For example, you could upload multiple application versions to test differences between them. Environment An environment is a collection of AWS resources running an application version. Each environment runs only one application version at a time, however, you can run the same application version or different application versions in many environments simultaneously. When you create an environment, Elastic Beanstalk provisions the resources needed in your AWS account to run the application version you specified. Environment tier When you launch an Elastic Beanstalk environment, you first choose an environment tier. The environment tier designates the type of application that the environment runs and determines what resources Elastic Beanstalk provisions to support it. An application that serves HTTP requests runs in a web server environment tier. A backend environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier. Environment configuration An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave. 
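The environment tier is chosen at creation time through the Tier parameter. A hedged sketch of launching a worker-tier environment that pulls from an Amazon SQS queue; the environment name and stack lookup are illustrative and not part of the tutorial:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# A worker-tier environment processes tasks from an SQS queue instead of serving HTTP.
php_stacks = [s for s in eb.list_available_solution_stacks()["SolutionStacks"] if "PHP" in s]

eb.create_environment(
    ApplicationName="getting-started-app",
    EnvironmentName="gs-app-worker-env",   # illustrative name
    SolutionStackName=php_stacks[0],
    Tier={"Name": "Worker", "Type": "SQS/HTTP"},
)
```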
When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of"} +{"global_id": 190, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 3, "question": "What does the environment tier designate?", "answer_span": "The environment tier designates the type of application that the environment runs and determines what resources Elastic Beanstalk provisions to support it.", "chunk": "all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a Java WAR file. An application version is part of an application. Applications can have many versions and each application version is unique. In a running environment, you can deploy any application version you already uploaded to the application, or you can upload and immediately deploy a new application version. For example, you could upload multiple application versions to test differences between them. Environment An environment is a collection of AWS resources running an application version. Each environment runs only one application version at a time, however, you can run the same application version or different application versions in many environments simultaneously. When you create an environment, Elastic Beanstalk provisions the resources needed in your AWS account to run the application version you specified. Environment tier When you launch an Elastic Beanstalk environment, you first choose an environment tier. The environment tier designates the type of application that the environment runs and determines what resources Elastic Beanstalk provisions to support it. An application that serves HTTP requests runs in a web server environment tier. A backend environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier. Environment configuration An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of"} +{"global_id": 191, "doc_id": "beanstalk", "chunk_id": "8", "question_id": 4, "question": "What happens when you update an environment’s configuration settings?", "answer_span": "When you update an environment’s configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources.", "chunk": "all the resources relevant to running your code. Application version In Elastic Beanstalk, an application version refers to a specific, labeled iteration of deployable code for a web application. An application version points to an Amazon Simple Storage Service (Amazon S3) object that contains the deployable code, such as a Java WAR file. An application version is part of an application. Applications can have many versions and each application version is unique. 
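Updating an environment's configuration amounts to sending new option settings and letting Elastic Beanstalk decide whether to modify resources in place or replace them. A minimal sketch with illustrative Auto Scaling values:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Example configuration change: adjust the Auto Scaling group size (values illustrative).
eb.update_environment(
    EnvironmentName="gs-app-web-env",
    OptionSettings=[
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "1"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "4"},
    ],
)
```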
In a running environment, you can deploy any application version you already uploaded to the application, or you can upload and immediately deploy a new application version. For example, you could upload multiple application versions to test differences between them. Environment An environment is a collection of AWS resources running an application version. Each environment runs only one application version at a time, however, you can run the same application version or different application versions in many environments simultaneously. When you create an environment, Elastic Beanstalk provisions the resources needed in your AWS account to run the application version you specified. Environment tier When you launch an Elastic Beanstalk environment, you first choose an environment tier. The environment tier designates the type of application that the environment runs and determines what resources Elastic Beanstalk provisions to support it. An application that serves HTTP requests runs in a web server environment tier. A backend environment that pulls tasks from an Amazon Simple Queue Service (Amazon SQS) queue runs in a worker environment tier. Environment configuration An environment configuration identifies a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of"} +{"global_id": 192, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 1, "question": "What is a saved configuration?", "answer_span": "A saved configuration is a template that you can use as a starting point for creating unique environment configurations.", "chunk": "a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of change). Saved configuration A saved configuration is a template that you can use as a starting point for creating unique environment configurations. You can create and modify saved configurations, and apply them to environments, using the Elastic Beanstalk console, EB CLI, AWS CLI, or API. The API and the AWS CLI refer to saved configurations as configuration templates. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and Elastic Beanstalk components. You design and target your web application to a platform. Elastic Beanstalk provides a variety of platforms on which you can build your applications. For details, see Elastic Beanstalk platforms. Elastic Beanstalk web server environments The following diagram shows an example Elastic Beanstalk architecture for a web server environment tier, and shows how the components in that type of environment tier work together. Saved configuration 144 AWS Elastic Beanstalk Developer Guide The environment is the heart of the application. In the diagram, the environment is shown within the top-level solid line. When you create an environment, Elastic Beanstalk provisions the resources required to run your application. 
AWS resources created for an environment include one elastic load balancer (ELB in the diagram), an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like"} +{"global_id": 193, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 2, "question": "What does Elastic Beanstalk do when you update an environment's configuration settings?", "answer_span": "Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of change).", "chunk": "a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of change). Saved configuration A saved configuration is a template that you can use as a starting point for creating unique environment configurations. You can create and modify saved configurations, and apply them to environments, using the Elastic Beanstalk console, EB CLI, AWS CLI, or API. The API and the AWS CLI refer to saved configurations as configuration templates. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and Elastic Beanstalk components. You design and target your web application to a platform. Elastic Beanstalk provides a variety of platforms on which you can build your applications. For details, see Elastic Beanstalk platforms. Elastic Beanstalk web server environments The following diagram shows an example Elastic Beanstalk architecture for a web server environment tier, and shows how the components in that type of environment tier work together. Saved configuration 144 AWS Elastic Beanstalk Developer Guide The environment is the heart of the application. In the diagram, the environment is shown within the top-level solid line. When you create an environment, Elastic Beanstalk provisions the resources required to run your application. AWS resources created for an environment include one elastic load balancer (ELB in the diagram), an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like"} +{"global_id": 194, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 3, "question": "What AWS resources are created for an environment?", "answer_span": "AWS resources created for an environment include one elastic load balancer (ELB in the diagram), an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances.", "chunk": "a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of change). 
Saved configuration A saved configuration is a template that you can use as a starting point for creating unique environment configurations. You can create and modify saved configurations, and apply them to environments, using the Elastic Beanstalk console, EB CLI, AWS CLI, or API. The API and the AWS CLI refer to saved configurations as configuration templates. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and Elastic Beanstalk components. You design and target your web application to a platform. Elastic Beanstalk provides a variety of platforms on which you can build your applications. For details, see Elastic Beanstalk platforms. Elastic Beanstalk web server environments The following diagram shows an example Elastic Beanstalk architecture for a web server environment tier, and shows how the components in that type of environment tier work together. Saved configuration 144 AWS Elastic Beanstalk Developer Guide The environment is the heart of the application. In the diagram, the environment is shown within the top-level solid line. When you create an environment, Elastic Beanstalk provisions the resources required to run your application. AWS resources created for an environment include one elastic load balancer (ELB in the diagram), an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like"} +{"global_id": 195, "doc_id": "beanstalk", "chunk_id": "9", "question_id": 4, "question": "What is the URL format for an environment in Elastic Beanstalk?", "answer_span": "The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com.", "chunk": "a collection of parameters and settings that define how an environment and its associated resources behave. When you update an environment’s Application 143 AWS Elastic Beanstalk Developer Guide configuration settings, Elastic Beanstalk automatically applies the changes to existing resources or deletes and deploys new resources (depending on the type of change). Saved configuration A saved configuration is a template that you can use as a starting point for creating unique environment configurations. You can create and modify saved configurations, and apply them to environments, using the Elastic Beanstalk console, EB CLI, AWS CLI, or API. The API and the AWS CLI refer to saved configurations as configuration templates. Platform A platform is a combination of an operating system, programming language runtime, web server, application server, and Elastic Beanstalk components. You design and target your web application to a platform. Elastic Beanstalk provides a variety of platforms on which you can build your applications. For details, see Elastic Beanstalk platforms. Elastic Beanstalk web server environments The following diagram shows an example Elastic Beanstalk architecture for a web server environment tier, and shows how the components in that type of environment tier work together. Saved configuration 144 AWS Elastic Beanstalk Developer Guide The environment is the heart of the application. In the diagram, the environment is shown within the top-level solid line. When you create an environment, Elastic Beanstalk provisions the resources required to run your application. 
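A saved configuration can be captured from a running environment through the API, where it is called a configuration template. A hedged boto3 sketch; the template name is illustrative:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Capture the running environment's settings as a saved configuration
# (the API refers to these as configuration templates).
env = eb.describe_environments(EnvironmentNames=["gs-app-web-env"])["Environments"][0]

eb.create_configuration_template(
    ApplicationName="getting-started-app",
    TemplateName="gs-app-web-env-baseline",   # illustrative name
    EnvironmentId=env["EnvironmentId"],
)
```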
AWS resources created for an environment include one elastic load balancer (ELB in the diagram), an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like"} +{"global_id": 196, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 1, "question": "What is used to point to a load balancer in an environment?", "answer_span": "Every environment has a CNAME (URL) that points to a load balancer.", "chunk": "an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like abcdef-123456.uswest-2.elb.amazonaws.com—by using a CNAME record. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It provides secure and reliable routing to your infrastructure. Your domain name that you registered with your DNS provider will forward requests to the CNAME. The load balancer sits in front of the Amazon EC2 instances, which are part of an Auto Scaling group. Amazon EC2 Auto Scaling automatically starts additional Amazon EC2 instances to accommodate increasing load on your application. If the load on your application decreases, Amazon EC2 Auto Scaling stops instances, but always leaves at least one instance running. The software stack running on the Amazon EC2 instances is dependent on the container type. A container type defines the infrastructure topology and software stack to be used for that environment. For example, an Elastic Beanstalk environment with an Apache Tomcat container uses the Amazon Linux operating system, Apache web server, and Apache Tomcat software. For a list of supported container types, see Elastic Beanstalk supported platforms. Each Amazon EC2 instance that runs your application uses one of these container types. In addition, a software component called the host manager (HM) runs on each Amazon EC2 instance. The host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components •"} +{"global_id": 197, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 2, "question": "What does Amazon EC2 Auto Scaling do when the load on your application decreases?", "answer_span": "If the load on your application decreases, Amazon EC2 Auto Scaling stops instances, but always leaves at least one instance running.", "chunk": "an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like abcdef-123456.uswest-2.elb.amazonaws.com—by using a CNAME record. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It provides secure and reliable routing to your infrastructure. 
Your domain name that you registered with your DNS provider will forward requests to the CNAME. The load balancer sits in front of the Amazon EC2 instances, which are part of an Auto Scaling group. Amazon EC2 Auto Scaling automatically starts additional Amazon EC2 instances to accommodate increasing load on your application. If the load on your application decreases, Amazon EC2 Auto Scaling stops instances, but always leaves at least one instance running. The software stack running on the Amazon EC2 instances is dependent on the container type. A container type defines the infrastructure topology and software stack to be used for that environment. For example, an Elastic Beanstalk environment with an Apache Tomcat container uses the Amazon Linux operating system, Apache web server, and Apache Tomcat software. For a list of supported container types, see Elastic Beanstalk supported platforms. Each Amazon EC2 instance that runs your application uses one of these container types. In addition, a software component called the host manager (HM) runs on each Amazon EC2 instance. The host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components •"} +{"global_id": 198, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 3, "question": "What is the role of the host manager (HM) on each Amazon EC2 instance?", "answer_span": "The host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components •", "chunk": "an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like abcdef-123456.uswest-2.elb.amazonaws.com—by using a CNAME record. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It provides secure and reliable routing to your infrastructure. Your domain name that you registered with your DNS provider will forward requests to the CNAME. The load balancer sits in front of the Amazon EC2 instances, which are part of an Auto Scaling group. Amazon EC2 Auto Scaling automatically starts additional Amazon EC2 instances to accommodate increasing load on your application. If the load on your application decreases, Amazon EC2 Auto Scaling stops instances, but always leaves at least one instance running. The software stack running on the Amazon EC2 instances is dependent on the container type. A container type defines the infrastructure topology and software stack to be used for that environment. For example, an Elastic Beanstalk environment with an Apache Tomcat container uses the Amazon Linux operating system, Apache web server, and Apache Tomcat software. For a list of supported container types, see Elastic Beanstalk supported platforms. Each Amazon EC2 instance that runs your application uses one of these container types. 
In addition, a software component called the host manager (HM) runs on each Amazon EC2 instance. The host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components •"} +{"global_id": 199, "doc_id": "beanstalk", "chunk_id": "10", "question_id": 4, "question": "What operating system does an Elastic Beanstalk environment with an Apache Tomcat container use?", "answer_span": "an Elastic Beanstalk environment with an Apache Tomcat container uses the Amazon Linux operating system.", "chunk": "an Auto Scaling group, and one or more Amazon Elastic Compute Cloud (Amazon EC2) instances. Every environment has a CNAME (URL) that points to a load balancer. The environment has a URL, such as myapp.us-west-2.elasticbeanstalk.com. This URL is aliased in Amazon Route 53 to an Elastic Load Balancing URL—something like abcdef-123456.uswest-2.elb.amazonaws.com—by using a CNAME record. Amazon Route 53 is a highly available and scalable Domain Name System (DNS) web service. It provides secure and reliable routing to your infrastructure. Your domain name that you registered with your DNS provider will forward requests to the CNAME. The load balancer sits in front of the Amazon EC2 instances, which are part of an Auto Scaling group. Amazon EC2 Auto Scaling automatically starts additional Amazon EC2 instances to accommodate increasing load on your application. If the load on your application decreases, Amazon EC2 Auto Scaling stops instances, but always leaves at least one instance running. The software stack running on the Amazon EC2 instances is dependent on the container type. A container type defines the infrastructure topology and software stack to be used for that environment. For example, an Elastic Beanstalk environment with an Apache Tomcat container uses the Amazon Linux operating system, Apache web server, and Apache Tomcat software. For a list of supported container types, see Elastic Beanstalk supported platforms. Each Amazon EC2 instance that runs your application uses one of these container types. In addition, a software component called the host manager (HM) runs on each Amazon EC2 instance. 
The host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components •"} +{"global_id": 200, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 1, "question": "What is the host manager responsible for?", "answer_span": "host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components • Rotating your application's log files and publishing them to Amazon S3", "chunk": "host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components • Rotating your application's log files and publishing them to Amazon S3 Web server environments 145 AWS Elastic Beanstalk Developer Guide The host manager reports metrics, errors and events, and server instance status, which are available via the Elastic Beanstalk console, APIs, and CLIs. The Amazon EC2 instances shown in the diagram are part of one security group. A security group defines the firewall rules for your instances. By default, Elastic Beanstalk defines a security group, which allows everyone to connect using port 80 (HTTP). You can define more than one security group. For example, you can define a security group for your database server. For more information about Amazon EC2 security groups and how to configure them for your Elastic Beanstalk application, see EC2 security groups. Elastic Beanstalk worker environments AWS resources created for a worker environment tier include an Auto Scaling group, one or more Amazon EC2 instances, and an IAM role. For the worker environment tier, Elastic Beanstalk also creates and provisions an Amazon SQS queue if you don’t already have one. When you launch a worker environment, Elastic Beanstalk installs the necessary support files for your programming language of choice and a daemon on each EC2 instance in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing. 
If you have multiple instances in your worker environment, each instance has its own daemon,"} +{"global_id": 201, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 2, "question": "What does the host manager report?", "answer_span": "The host manager reports metrics, errors and events, and server instance status, which are available via the Elastic Beanstalk console, APIs, and CLIs.", "chunk": "host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components • Rotating your application's log files and publishing them to Amazon S3 Web server environments 145 AWS Elastic Beanstalk Developer Guide The host manager reports metrics, errors and events, and server instance status, which are available via the Elastic Beanstalk console, APIs, and CLIs. The Amazon EC2 instances shown in the diagram are part of one security group. A security group defines the firewall rules for your instances. By default, Elastic Beanstalk defines a security group, which allows everyone to connect using port 80 (HTTP). You can define more than one security group. For example, you can define a security group for your database server. For more information about Amazon EC2 security groups and how to configure them for your Elastic Beanstalk application, see EC2 security groups. Elastic Beanstalk worker environments AWS resources created for a worker environment tier include an Auto Scaling group, one or more Amazon EC2 instances, and an IAM role. For the worker environment tier, Elastic Beanstalk also creates and provisions an Amazon SQS queue if you don’t already have one. When you launch a worker environment, Elastic Beanstalk installs the necessary support files for your programming language of choice and a daemon on each EC2 instance in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing. If you have multiple instances in your worker environment, each instance has its own daemon,"} +{"global_id": 202, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 3, "question": "What AWS resources are created for a worker environment tier?", "answer_span": "AWS resources created for a worker environment tier include an Auto Scaling group, one or more Amazon EC2 instances, and an IAM role.", "chunk": "host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components • Rotating your application's log files and publishing them to Amazon S3 Web server environments 145 AWS Elastic Beanstalk Developer Guide The host manager reports metrics, errors and events, and server instance status, which are available via the Elastic Beanstalk console, APIs, and CLIs. The Amazon EC2 instances shown in the diagram are part of one security group. A security group defines the firewall rules for your instances. By default, Elastic Beanstalk defines a security group, which allows everyone to connect using port 80 (HTTP). You can define more than one security group. 
For example, you can define a security group for your database server. For more information about Amazon EC2 security groups and how to configure them for your Elastic Beanstalk application, see EC2 security groups. Elastic Beanstalk worker environments AWS resources created for a worker environment tier include an Auto Scaling group, one or more Amazon EC2 instances, and an IAM role. For the worker environment tier, Elastic Beanstalk also creates and provisions an Amazon SQS queue if you don’t already have one. When you launch a worker environment, Elastic Beanstalk installs the necessary support files for your programming language of choice and a daemon on each EC2 instance in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing. If you have multiple instances in your worker environment, each instance has its own daemon,"} +{"global_id": 203, "doc_id": "beanstalk", "chunk_id": "11", "question_id": 4, "question": "What does the daemon in the worker environment do?", "answer_span": "The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing.", "chunk": "host manager is responsible for the following: • Deploying the application • Aggregating events and metrics for retrieval via the console, the API, or the command line • Generating instance-level events • Monitoring the application log files for critical errors • Monitoring the application server • Patching instance components • Rotating your application's log files and publishing them to Amazon S3 Web server environments 145 AWS Elastic Beanstalk Developer Guide The host manager reports metrics, errors and events, and server instance status, which are available via the Elastic Beanstalk console, APIs, and CLIs. The Amazon EC2 instances shown in the diagram are part of one security group. A security group defines the firewall rules for your instances. By default, Elastic Beanstalk defines a security group, which allows everyone to connect using port 80 (HTTP). You can define more than one security group. For example, you can define a security group for your database server. For more information about Amazon EC2 security groups and how to configure them for your Elastic Beanstalk application, see EC2 security groups. Elastic Beanstalk worker environments AWS resources created for a worker environment tier include an Auto Scaling group, one or more Amazon EC2 instances, and an IAM role. For the worker environment tier, Elastic Beanstalk also creates and provisions an Amazon SQS queue if you don’t already have one. When you launch a worker environment, Elastic Beanstalk installs the necessary support files for your programming language of choice and a daemon on each EC2 instance in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing. If you have multiple instances in your worker environment, each instance has its own daemon,"} +{"global_id": 204, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 1, "question": "What does the daemon read messages from?", "answer_span": "The daemon reads messages from an Amazon SQS queue.", "chunk": "in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. 
The daemon sends data from each message that it reads to the web application running in the worker environment for processing. If you have multiple instances in your worker environment, each instance has its own daemon, but they all read from the same Amazon SQS queue. The following diagram shows the different components and their interactions across environments and AWS services. Worker environments 146 AWS Elastic Beanstalk Developer Guide Amazon CloudWatch is used for alarms and health monitoring. For more information, go to Basic health reporting. For details about how the worker environment tier works, see Elastic Beanstalk worker environments. Design considerations for your Elastic Beanstalk applications Because applications deployed using AWS Elastic Beanstalk run on AWS Cloud resources, you should keep several configuration factors in mind to optimize your applications: scalability, security, persistent storage, fault tolerance, content delivery, software updates and patching, and connectivity. Each of these are covered separately in this topic. For a comprehensive list of technical AWS whitepapers, covering topics such as architecture, as well as security and economics, see AWS Cloud Computing Whitepapers. Design considerations 147 AWS Elastic Beanstalk Developer Guide Scalability When operating in a physical hardware environment, in contrast to a cloud environment, you can approach scalability in one of either two ways. Either you can scale up through vertical scaling or you can scale out through horizontal scaling. The scale-up approach requires that you invest in powerful hardware, which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and"} +{"global_id": 205, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 2, "question": "What is used for alarms and health monitoring in the worker environment?", "answer_span": "Amazon CloudWatch is used for alarms and health monitoring.", "chunk": "in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing. If you have multiple instances in your worker environment, each instance has its own daemon, but they all read from the same Amazon SQS queue. The following diagram shows the different components and their interactions across environments and AWS services. Worker environments 146 AWS Elastic Beanstalk Developer Guide Amazon CloudWatch is used for alarms and health monitoring. For more information, go to Basic health reporting. For details about how the worker environment tier works, see Elastic Beanstalk worker environments. Design considerations for your Elastic Beanstalk applications Because applications deployed using AWS Elastic Beanstalk run on AWS Cloud resources, you should keep several configuration factors in mind to optimize your applications: scalability, security, persistent storage, fault tolerance, content delivery, software updates and patching, and connectivity. Each of these are covered separately in this topic. For a comprehensive list of technical AWS whitepapers, covering topics such as architecture, as well as security and economics, see AWS Cloud Computing Whitepapers. 
Design considerations 147 AWS Elastic Beanstalk Developer Guide Scalability When operating in a physical hardware environment, in contrast to a cloud environment, you can approach scalability in one of either two ways. Either you can scale up through vertical scaling or you can scale out through horizontal scaling. The scale-up approach requires that you invest in powerful hardware, which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and"} +{"global_id": 206, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 3, "question": "What are the design considerations for your Elastic Beanstalk applications?", "answer_span": "scalability, security, persistent storage, fault tolerance, content delivery, software updates and patching, and connectivity.", "chunk": "in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. The daemon sends data from each message that it reads to the web application running in the worker environment for processing. If you have multiple instances in your worker environment, each instance has its own daemon, but they all read from the same Amazon SQS queue. The following diagram shows the different components and their interactions across environments and AWS services. Worker environments 146 AWS Elastic Beanstalk Developer Guide Amazon CloudWatch is used for alarms and health monitoring. For more information, go to Basic health reporting. For details about how the worker environment tier works, see Elastic Beanstalk worker environments. Design considerations for your Elastic Beanstalk applications Because applications deployed using AWS Elastic Beanstalk run on AWS Cloud resources, you should keep several configuration factors in mind to optimize your applications: scalability, security, persistent storage, fault tolerance, content delivery, software updates and patching, and connectivity. Each of these are covered separately in this topic. For a comprehensive list of technical AWS whitepapers, covering topics such as architecture, as well as security and economics, see AWS Cloud Computing Whitepapers. Design considerations 147 AWS Elastic Beanstalk Developer Guide Scalability When operating in a physical hardware environment, in contrast to a cloud environment, you can approach scalability in one of either two ways. Either you can scale up through vertical scaling or you can scale out through horizontal scaling. The scale-up approach requires that you invest in powerful hardware, which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and"} +{"global_id": 207, "doc_id": "beanstalk", "chunk_id": "12", "question_id": 4, "question": "What are the two ways to approach scalability in a physical hardware environment?", "answer_span": "Either you can scale up through vertical scaling or you can scale out through horizontal scaling.", "chunk": "in the Auto Scaling group. The daemon reads messages from an Amazon SQS queue. 
The daemon sends data from each message that it reads to the web application running in the worker environment for processing. If you have multiple instances in your worker environment, each instance has its own daemon, but they all read from the same Amazon SQS queue. The following diagram shows the different components and their interactions across environments and AWS services. Worker environments 146 AWS Elastic Beanstalk Developer Guide Amazon CloudWatch is used for alarms and health monitoring. For more information, go to Basic health reporting. For details about how the worker environment tier works, see Elastic Beanstalk worker environments. Design considerations for your Elastic Beanstalk applications Because applications deployed using AWS Elastic Beanstalk run on AWS Cloud resources, you should keep several configuration factors in mind to optimize your applications: scalability, security, persistent storage, fault tolerance, content delivery, software updates and patching, and connectivity. Each of these are covered separately in this topic. For a comprehensive list of technical AWS whitepapers, covering topics such as architecture, as well as security and economics, see AWS Cloud Computing Whitepapers. Design considerations 147 AWS Elastic Beanstalk Developer Guide Scalability When operating in a physical hardware environment, in contrast to a cloud environment, you can approach scalability in one of either two ways. Either you can scale up through vertical scaling or you can scale out through horizontal scaling. The scale-up approach requires that you invest in powerful hardware, which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and"} +{"global_id": 208, "doc_id": "beanstalk", "chunk_id": "13", "question_id": 1, "question": "What approach requires that you follow a distributed model of investment?", "answer_span": "The scaleout approach requires that you follow a distributed model of investment.", "chunk": "which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and there's also the risk that your demand could outgrow your capacity. In this regard, the scale-out approach is usually more effective. However, when using it, you must be able to predict demand at regular intervals and deploy infrastructure in chunks to meet that demand. As a result, this approach can often lead to unused capacity and might require some careful monitoring. By migrating to the cloud, you can make your infrastructure align well with demand by leveraging the elasticity of cloud. Elasticity helps to streamline resource acquisition and release. With it, your infrastructure can rapidly scale in and scale out as demand fluctuates. To use it, configure your Auto Scaling settings to scale up or down based on the metrics for the resources in your environment. For example, you can set metrics such as server utilization or network I/O. You can use Auto Scaling for compute capacity to be added automatically whenever usage rises and for it to be removed whenever usage drops. 
You can publish system metrics (for example, CPU, memory, disk I/O, and network I/O) to Amazon CloudWatch. Then, you can use CloudWatch to configure alarms to trigger Auto Scaling actions or send notifications based on these metrics. For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application"} +{"global_id": 209, "doc_id": "beanstalk", "chunk_id": "13", "question_id": 2, "question": "What can help to streamline resource acquisition and release?", "answer_span": "Elasticity helps to streamline resource acquisition and release.", "chunk": "which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and there's also the risk that your demand could outgrow your capacity. In this regard, the scale-out approach is usually more effective. However, when using it, you must be able to predict demand at regular intervals and deploy infrastructure in chunks to meet that demand. As a result, this approach can often lead to unused capacity and might require some careful monitoring. By migrating to the cloud, you can make your infrastructure align well with demand by leveraging the elasticity of cloud. Elasticity helps to streamline resource acquisition and release. With it, your infrastructure can rapidly scale in and scale out as demand fluctuates. To use it, configure your Auto Scaling settings to scale up or down based on the metrics for the resources in your environment. For example, you can set metrics such as server utilization or network I/O. You can use Auto Scaling for compute capacity to be added automatically whenever usage rises and for it to be removed whenever usage drops. You can publish system metrics (for example, CPU, memory, disk I/O, and network I/O) to Amazon CloudWatch. Then, you can use CloudWatch to configure alarms to trigger Auto Scaling actions or send notifications based on these metrics. For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application"} +{"global_id": 210, "doc_id": "beanstalk", "chunk_id": "13", "question_id": 3, "question": "What should you configure to scale up or down based on the metrics for the resources in your environment?", "answer_span": "To use it, configure your Auto Scaling settings to scale up or down based on the metrics for the resources in your environment.", "chunk": "which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and there's also the risk that your demand could outgrow your capacity. 
In this regard, the scale-out approach is usually more effective. However, when using it, you must be able to predict demand at regular intervals and deploy infrastructure in chunks to meet that demand. As a result, this approach can often lead to unused capacity and might require some careful monitoring. By migrating to the cloud, you can make your infrastructure align well with demand by leveraging the elasticity of cloud. Elasticity helps to streamline resource acquisition and release. With it, your infrastructure can rapidly scale in and scale out as demand fluctuates. To use it, configure your Auto Scaling settings to scale up or down based on the metrics for the resources in your environment. For example, you can set metrics such as server utilization or network I/O. You can use Auto Scaling for compute capacity to be added automatically whenever usage rises and for it to be removed whenever usage drops. You can publish system metrics (for example, CPU, memory, disk I/O, and network I/O) to Amazon CloudWatch. Then, you can use CloudWatch to configure alarms to trigger Auto Scaling actions or send notifications based on these metrics. For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application"} +{"global_id": 211, "doc_id": "beanstalk", "chunk_id": "13", "question_id": 4, "question": "What do you need to design all your Elastic Beanstalk applications as?", "answer_span": "We also recommend that you design all your Elastic Beanstalk applications as stateless as possible.", "chunk": "which can support the increasing demands of your business. The scaleout approach requires that you follow a distributed model of investment. As such, your hardware and application acquisitions can be more targeted, your data sets are federated, and your design is service oriented. The scale-up approach can be expensive, and there's also the risk that your demand could outgrow your capacity. In this regard, the scale-out approach is usually more effective. However, when using it, you must be able to predict demand at regular intervals and deploy infrastructure in chunks to meet that demand. As a result, this approach can often lead to unused capacity and might require some careful monitoring. By migrating to the cloud, you can make your infrastructure align well with demand by leveraging the elasticity of cloud. Elasticity helps to streamline resource acquisition and release. With it, your infrastructure can rapidly scale in and scale out as demand fluctuates. To use it, configure your Auto Scaling settings to scale up or down based on the metrics for the resources in your environment. For example, you can set metrics such as server utilization or network I/O. You can use Auto Scaling for compute capacity to be added automatically whenever usage rises and for it to be removed whenever usage drops. You can publish system metrics (for example, CPU, memory, disk I/O, and network I/O) to Amazon CloudWatch. Then, you can use CloudWatch to configure alarms to trigger Auto Scaling actions or send notifications based on these metrics. For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. 
We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application"} +{"global_id": 212, "doc_id": "beanstalk", "chunk_id": "14", "question_id": 1, "question": "What is recommended for designing Elastic Beanstalk applications?", "answer_span": "We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed.", "chunk": "For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application architectures for AWS, see AWS Well-Architected Framework. Security Security on AWS is a shared responsibility. Amazon Web Services protects the physical resources in your environment and ensures that the Cloud is a safe place for you to run applications. You're responsible for the security of data coming in and out of your Elastic Beanstalk environment and the security of your application. Configure SSL to protect information that flows between your application and clients. To configure SSL, you need a free certificate from AWS Certificate Manager (ACM). If you already have a Scalability 148 AWS Elastic Beanstalk Developer Guide certificate from an external certificate authority (CA), you can use ACM to import that your certificate. Otherwise, you can import it using the AWS CLI. If ACM isn't available in your AWS Region, you can purchase a certificate from an external CA, such as VeriSign or Entrust. Then, use the AWS Command Line Interface (AWS CLI) to upload a thirdparty or self-signed certificate and private key to AWS Identity and Access Management (IAM). The public key of the certificate authenticates your server to the browser. It also serves as the basis for creating the shared session key that encrypts the data in both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment."} +{"global_id": 213, "doc_id": "beanstalk", "chunk_id": "14", "question_id": 2, "question": "What is the responsibility of Amazon Web Services regarding security?", "answer_span": "Amazon Web Services protects the physical resources in your environment and ensures that the Cloud is a safe place for you to run applications.", "chunk": "For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application architectures for AWS, see AWS Well-Architected Framework. Security Security on AWS is a shared responsibility. Amazon Web Services protects the physical resources in your environment and ensures that the Cloud is a safe place for you to run applications. 
You're responsible for the security of data coming in and out of your Elastic Beanstalk environment and the security of your application. Configure SSL to protect information that flows between your application and clients. To configure SSL, you need a free certificate from AWS Certificate Manager (ACM). If you already have a Scalability 148 AWS Elastic Beanstalk Developer Guide certificate from an external certificate authority (CA), you can use ACM to import that your certificate. Otherwise, you can import it using the AWS CLI. If ACM isn't available in your AWS Region, you can purchase a certificate from an external CA, such as VeriSign or Entrust. Then, use the AWS Command Line Interface (AWS CLI) to upload a thirdparty or self-signed certificate and private key to AWS Identity and Access Management (IAM). The public key of the certificate authenticates your server to the browser. It also serves as the basis for creating the shared session key that encrypts the data in both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment."} +{"global_id": 214, "doc_id": "beanstalk", "chunk_id": "14", "question_id": 3, "question": "What do you need to configure SSL?", "answer_span": "To configure SSL, you need a free certificate from AWS Certificate Manager (ACM).", "chunk": "For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application architectures for AWS, see AWS Well-Architected Framework. Security Security on AWS is a shared responsibility. Amazon Web Services protects the physical resources in your environment and ensures that the Cloud is a safe place for you to run applications. You're responsible for the security of data coming in and out of your Elastic Beanstalk environment and the security of your application. Configure SSL to protect information that flows between your application and clients. To configure SSL, you need a free certificate from AWS Certificate Manager (ACM). If you already have a Scalability 148 AWS Elastic Beanstalk Developer Guide certificate from an external certificate authority (CA), you can use ACM to import that your certificate. Otherwise, you can import it using the AWS CLI. If ACM isn't available in your AWS Region, you can purchase a certificate from an external CA, such as VeriSign or Entrust. Then, use the AWS Command Line Interface (AWS CLI) to upload a thirdparty or self-signed certificate and private key to AWS Identity and Access Management (IAM). The public key of the certificate authenticates your server to the browser. It also serves as the basis for creating the shared session key that encrypts the data in both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. 
When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment."} +{"global_id": 215, "doc_id": "beanstalk", "chunk_id": "14", "question_id": 4, "question": "What happens when you configure an SSL certificate for your environment?", "answer_span": "When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment.", "chunk": "For instructions on how to configure Auto Scaling, see Auto Scaling your Elastic Beanstalk environment instances. We also recommend that you design all your Elastic Beanstalk applications as stateless as possible, using loosely coupled, fault-tolerant components that can be scaled out as needed. For more information about designing scalable application architectures for AWS, see AWS Well-Architected Framework. Security Security on AWS is a shared responsibility. Amazon Web Services protects the physical resources in your environment and ensures that the Cloud is a safe place for you to run applications. You're responsible for the security of data coming in and out of your Elastic Beanstalk environment and the security of your application. Configure SSL to protect information that flows between your application and clients. To configure SSL, you need a free certificate from AWS Certificate Manager (ACM). If you already have a Scalability 148 AWS Elastic Beanstalk Developer Guide certificate from an external certificate authority (CA), you can use ACM to import that your certificate. Otherwise, you can import it using the AWS CLI. If ACM isn't available in your AWS Region, you can purchase a certificate from an external CA, such as VeriSign or Entrust. Then, use the AWS Command Line Interface (AWS CLI) to upload a thirdparty or self-signed certificate and private key to AWS Identity and Access Management (IAM). The public key of the certificate authenticates your server to the browser. It also serves as the basis for creating the shared session key that encrypts the data in both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment."} +{"global_id": 216, "doc_id": "beanstalk", "chunk_id": "15", "question_id": 1, "question": "What happens to the local file system when Amazon EC2 instances terminate?", "answer_span": "When the Amazon EC2 instances terminate, the local file system isn't saved.", "chunk": "both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment. By default, encryption is terminated at the load balancer, and traffic between the load balancer and Amazon EC2 instances is unencrypted. Persistent storage Elastic Beanstalk applications run on Amazon EC2 instances that have no persistent local storage. When the Amazon EC2 instances terminate, the local file system isn't saved. New Amazon EC2 instances start with a default file system. We recommend that you configure your application to store data in a persistent data source. 
AWS offers a number of persistent storage services that you can use for your application. The following table lists them. Storage service Service documentation Elastic Beanstalk integration Amazon S3 Amazon Simple Storage Service Documentation Using Elastic Beanstalk with Amazon S3 Amazon Elastic File System Amazon Elastic File System Documentation Using Elastic Beanstalk with Amazon Elastic File System Amazon Elastic Block Store Amazon Elastic Block Store Amazon DynamoDB Amazon DynamoDB Documentation Persistent storage Feature Guide: Elastic Block Store Using Elastic Beanstalk with Amazon DynamoDB 149 AWS Elastic Beanstalk Developer Guide Storage service Service documentation Elastic Beanstalk integration Amazon Relational Database Service (RDS) Amazon Relational Database Service Documentation Using Elastic Beanstalk with Amazon RDS Note Elastic Beanstalk creates a webapp user for you to set up as the owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. It does the same for existing environments following"} +{"global_id": 217, "doc_id": "beanstalk", "chunk_id": "15", "question_id": 2, "question": "What is the default file system for new Amazon EC2 instances?", "answer_span": "New Amazon EC2 instances start with a default file system.", "chunk": "both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment. By default, encryption is terminated at the load balancer, and traffic between the load balancer and Amazon EC2 instances is unencrypted. Persistent storage Elastic Beanstalk applications run on Amazon EC2 instances that have no persistent local storage. When the Amazon EC2 instances terminate, the local file system isn't saved. New Amazon EC2 instances start with a default file system. We recommend that you configure your application to store data in a persistent data source. AWS offers a number of persistent storage services that you can use for your application. The following table lists them. Storage service Service documentation Elastic Beanstalk integration Amazon S3 Amazon Simple Storage Service Documentation Using Elastic Beanstalk with Amazon S3 Amazon Elastic File System Amazon Elastic File System Documentation Using Elastic Beanstalk with Amazon Elastic File System Amazon Elastic Block Store Amazon Elastic Block Store Amazon DynamoDB Amazon DynamoDB Documentation Persistent storage Feature Guide: Elastic Block Store Using Elastic Beanstalk with Amazon DynamoDB 149 AWS Elastic Beanstalk Developer Guide Storage service Service documentation Elastic Beanstalk integration Amazon Relational Database Service (RDS) Amazon Relational Database Service Documentation Using Elastic Beanstalk with Amazon RDS Note Elastic Beanstalk creates a webapp user for you to set up as the owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. 
It does the same for existing environments following"} +{"global_id": 218, "doc_id": "beanstalk", "chunk_id": "15", "question_id": 3, "question": "What does Elastic Beanstalk create for you on EC2 instances?", "answer_span": "Elastic Beanstalk creates a webapp user for you to set up as the owner of application directories on EC2 instances.", "chunk": "both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment. By default, encryption is terminated at the load balancer, and traffic between the load balancer and Amazon EC2 instances is unencrypted. Persistent storage Elastic Beanstalk applications run on Amazon EC2 instances that have no persistent local storage. When the Amazon EC2 instances terminate, the local file system isn't saved. New Amazon EC2 instances start with a default file system. We recommend that you configure your application to store data in a persistent data source. AWS offers a number of persistent storage services that you can use for your application. The following table lists them. Storage service Service documentation Elastic Beanstalk integration Amazon S3 Amazon Simple Storage Service Documentation Using Elastic Beanstalk with Amazon S3 Amazon Elastic File System Amazon Elastic File System Documentation Using Elastic Beanstalk with Amazon Elastic File System Amazon Elastic Block Store Amazon Elastic Block Store Amazon DynamoDB Amazon DynamoDB Documentation Persistent storage Feature Guide: Elastic Block Store Using Elastic Beanstalk with Amazon DynamoDB 149 AWS Elastic Beanstalk Developer Guide Storage service Service documentation Elastic Beanstalk integration Amazon Relational Database Service (RDS) Amazon Relational Database Service Documentation Using Elastic Beanstalk with Amazon RDS Note Elastic Beanstalk creates a webapp user for you to set up as the owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. It does the same for existing environments following"} +{"global_id": 219, "doc_id": "beanstalk", "chunk_id": "15", "question_id": 4, "question": "What uid and gid values does Elastic Beanstalk assign to the webapp user for new environments?", "answer_span": "Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments.", "chunk": "both directions. For instructions on how to create, upload, and assign an SSL certificate to your environment, see Configuring HTTPS for your Elastic Beanstalk environment. When you configure an SSL certificate for your environment, data is encrypted between the client and the Elastic Load Balancing load balancer for your environment. By default, encryption is terminated at the load balancer, and traffic between the load balancer and Amazon EC2 instances is unencrypted. Persistent storage Elastic Beanstalk applications run on Amazon EC2 instances that have no persistent local storage. When the Amazon EC2 instances terminate, the local file system isn't saved. New Amazon EC2 instances start with a default file system. We recommend that you configure your application to store data in a persistent data source. 
AWS offers a number of persistent storage services that you can use for your application. The following table lists them. Storage service Service documentation Elastic Beanstalk integration Amazon S3 Amazon Simple Storage Service Documentation Using Elastic Beanstalk with Amazon S3 Amazon Elastic File System Amazon Elastic File System Documentation Using Elastic Beanstalk with Amazon Elastic File System Amazon Elastic Block Store Amazon Elastic Block Store Amazon DynamoDB Amazon DynamoDB Documentation Persistent storage Feature Guide: Elastic Block Store Using Elastic Beanstalk with Amazon DynamoDB 149 AWS Elastic Beanstalk Developer Guide Storage service Service documentation Elastic Beanstalk integration Amazon Relational Database Service (RDS) Amazon Relational Database Service Documentation Using Elastic Beanstalk with Amazon RDS Note Elastic Beanstalk creates a webapp user for you to set up as the owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. It does the same for existing environments following"} +{"global_id": 220, "doc_id": "beanstalk", "chunk_id": "16", "question_id": 1, "question": "What uid and gid values does Elastic Beanstalk assign to the webapp user for new environments?", "answer_span": "Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments.", "chunk": "owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. It does the same for existing environments following a platform version update. This approach keeps consistent access permission for the webapp user to permanent file system storage. In the unlikely situation that another user or process is already using 900, the operating system defaults the webapp user uid and gid to another value. Run the Linux command id webapp on your EC2 instances to verify the uid and gid values that are assigned to the webapp user. Fault tolerance As a rule of thumb, you should be a pessimist when designing architecture for the cloud. Leverage the elasticity that it offers. Always design, implement, and deploy for automated recovery from failure. Use multiple Availability Zones for your Amazon EC2 instances and for Amazon RDS. Availability Zones are conceptually like logical data centers. Use Amazon CloudWatch to get more visibility into the health of your Elastic Beanstalk application and take appropriate actions in case of hardware failure or performance degradation. Configure your Auto Scaling settings to maintain your fleet of Amazon EC2 instances at a fixed size so that unhealthy Amazon EC2 instances are replaced by new ones. If you're using Amazon RDS, then set the retention period for backups, so that Amazon RDS can perform automated backups. Content delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. 
Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network"} +{"global_id": 221, "doc_id": "beanstalk", "chunk_id": "16", "question_id": 2, "question": "What should you do if another user or process is already using uid and gid value 900?", "answer_span": "In the unlikely situation that another user or process is already using 900, the operating system defaults the webapp user uid and gid to another value.", "chunk": "owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. It does the same for existing environments following a platform version update. This approach keeps consistent access permission for the webapp user to permanent file system storage. In the unlikely situation that another user or process is already using 900, the operating system defaults the webapp user uid and gid to another value. Run the Linux command id webapp on your EC2 instances to verify the uid and gid values that are assigned to the webapp user. Fault tolerance As a rule of thumb, you should be a pessimist when designing architecture for the cloud. Leverage the elasticity that it offers. Always design, implement, and deploy for automated recovery from failure. Use multiple Availability Zones for your Amazon EC2 instances and for Amazon RDS. Availability Zones are conceptually like logical data centers. Use Amazon CloudWatch to get more visibility into the health of your Elastic Beanstalk application and take appropriate actions in case of hardware failure or performance degradation. Configure your Auto Scaling settings to maintain your fleet of Amazon EC2 instances at a fixed size so that unhealthy Amazon EC2 instances are replaced by new ones. If you're using Amazon RDS, then set the retention period for backups, so that Amazon RDS can perform automated backups. Content delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network"} +{"global_id": 222, "doc_id": "beanstalk", "chunk_id": "16", "question_id": 3, "question": "What is a recommended practice for designing architecture for the cloud?", "answer_span": "As a rule of thumb, you should be a pessimist when designing architecture for the cloud.", "chunk": "owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. It does the same for existing environments following a platform version update. This approach keeps consistent access permission for the webapp user to permanent file system storage. In the unlikely situation that another user or process is already using 900, the operating system defaults the webapp user uid and gid to another value. Run the Linux command id webapp on your EC2 instances to verify the uid and gid values that are assigned to the webapp user. Fault tolerance As a rule of thumb, you should be a pessimist when designing architecture for the cloud. Leverage the elasticity that it offers. 
Always design, implement, and deploy for automated recovery from failure. Use multiple Availability Zones for your Amazon EC2 instances and for Amazon RDS. Availability Zones are conceptually like logical data centers. Use Amazon CloudWatch to get more visibility into the health of your Elastic Beanstalk application and take appropriate actions in case of hardware failure or performance degradation. Configure your Auto Scaling settings to maintain your fleet of Amazon EC2 instances at a fixed size so that unhealthy Amazon EC2 instances are replaced by new ones. If you're using Amazon RDS, then set the retention period for backups, so that Amazon RDS can perform automated backups. Content delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network"} +{"global_id": 223, "doc_id": "beanstalk", "chunk_id": "16", "question_id": 4, "question": "How can Amazon CloudFront help users connecting to a website?", "answer_span": "Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network.", "chunk": "owner of application directories on EC2 instances. For Amazon Linux 2 platform versions that are released on or after Feburary 3, 2022, Elastic Beanstalk assigns the webapp user a uid (user id) and gid (group id) value of 900 for new environments. It does the same for existing environments following a platform version update. This approach keeps consistent access permission for the webapp user to permanent file system storage. In the unlikely situation that another user or process is already using 900, the operating system defaults the webapp user uid and gid to another value. Run the Linux command id webapp on your EC2 instances to verify the uid and gid values that are assigned to the webapp user. Fault tolerance As a rule of thumb, you should be a pessimist when designing architecture for the cloud. Leverage the elasticity that it offers. Always design, implement, and deploy for automated recovery from failure. Use multiple Availability Zones for your Amazon EC2 instances and for Amazon RDS. Availability Zones are conceptually like logical data centers. Use Amazon CloudWatch to get more visibility into the health of your Elastic Beanstalk application and take appropriate actions in case of hardware failure or performance degradation. Configure your Auto Scaling settings to maintain your fleet of Amazon EC2 instances at a fixed size so that unhealthy Amazon EC2 instances are replaced by new ones. If you're using Amazon RDS, then set the retention period for backups, so that Amazon RDS can perform automated backups. Content delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. 
Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network"} +{"global_id": 224, "doc_id": "beanstalk", "chunk_id": "17", "question_id": 1, "question": "What can Amazon CloudFront help ameliorate?", "answer_span": "Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network of edge locations around the world.", "chunk": "delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network of edge locations around the world. Users' requests are routed to the Fault tolerance 150 AWS Elastic Beanstalk Developer Guide nearest edge location, so content is delivered with the best possible performance. CloudFront works seamlessly with Amazon S3, which durably stores the original, definitive versions of your files. For more information about Amazon CloudFront, see the Amazon CloudFront Developer Guide. Software updates and patching AWS Elastic Beanstalk regularly releases platform updates to provide fixes, software updates, and new features. Elastic Beanstalk offers several options to handle platform updates. With managed platform updates your environment automatically upgrades to the latest version of a platform during a scheduled maintenance window while your application remains in service. For environments created on November 25, 2019 or later using the Elastic Beanstalk console, managed updates are enabled by default whenever possible. You can also manually initiate updates using the Elastic Beanstalk console or EB CLI. Connectivity Elastic Beanstalk needs to be able to connect to the instances in your environment to complete deployments. When you deploy an Elastic Beanstalk application inside an Amazon VPC, the configuration required to enable connectivity depends on the type of Amazon VPC environment you create: • For single-instance environments, no additional configuration is required. This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do"} +{"global_id": 225, "doc_id": "beanstalk", "chunk_id": "17", "question_id": 2, "question": "What does Elastic Beanstalk regularly release?", "answer_span": "AWS Elastic Beanstalk regularly releases platform updates to provide fixes, software updates, and new features.", "chunk": "delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network of edge locations around the world. Users' requests are routed to the Fault tolerance 150 AWS Elastic Beanstalk Developer Guide nearest edge location, so content is delivered with the best possible performance. CloudFront works seamlessly with Amazon S3, which durably stores the original, definitive versions of your files. For more information about Amazon CloudFront, see the Amazon CloudFront Developer Guide. 
Software updates and patching AWS Elastic Beanstalk regularly releases platform updates to provide fixes, software updates, and new features. Elastic Beanstalk offers several options to handle platform updates. With managed platform updates your environment automatically upgrades to the latest version of a platform during a scheduled maintenance window while your application remains in service. For environments created on November 25, 2019 or later using the Elastic Beanstalk console, managed updates are enabled by default whenever possible. You can also manually initiate updates using the Elastic Beanstalk console or EB CLI. Connectivity Elastic Beanstalk needs to be able to connect to the instances in your environment to complete deployments. When you deploy an Elastic Beanstalk application inside an Amazon VPC, the configuration required to enable connectivity depends on the type of Amazon VPC environment you create: • For single-instance environments, no additional configuration is required. This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do"} +{"global_id": 226, "doc_id": "beanstalk", "chunk_id": "17", "question_id": 3, "question": "What happens during a scheduled maintenance window with managed platform updates?", "answer_span": "With managed platform updates your environment automatically upgrades to the latest version of a platform during a scheduled maintenance window while your application remains in service.", "chunk": "delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network of edge locations around the world. Users' requests are routed to the Fault tolerance 150 AWS Elastic Beanstalk Developer Guide nearest edge location, so content is delivered with the best possible performance. CloudFront works seamlessly with Amazon S3, which durably stores the original, definitive versions of your files. For more information about Amazon CloudFront, see the Amazon CloudFront Developer Guide. Software updates and patching AWS Elastic Beanstalk regularly releases platform updates to provide fixes, software updates, and new features. Elastic Beanstalk offers several options to handle platform updates. With managed platform updates your environment automatically upgrades to the latest version of a platform during a scheduled maintenance window while your application remains in service. For environments created on November 25, 2019 or later using the Elastic Beanstalk console, managed updates are enabled by default whenever possible. You can also manually initiate updates using the Elastic Beanstalk console or EB CLI. Connectivity Elastic Beanstalk needs to be able to connect to the instances in your environment to complete deployments. When you deploy an Elastic Beanstalk application inside an Amazon VPC, the configuration required to enable connectivity depends on the type of Amazon VPC environment you create: • For single-instance environments, no additional configuration is required. 
This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do"} +{"global_id": 227, "doc_id": "beanstalk", "chunk_id": "17", "question_id": 4, "question": "What is required for Elastic Beanstalk to connect to instances in your environment?", "answer_span": "Elastic Beanstalk needs to be able to connect to the instances in your environment to complete deployments.", "chunk": "delivery When users connect to your website, their requests may be routed through a number of individual networks. As a result, users might experience poor performance due to high latency. Amazon CloudFront can help ameliorate latency issues by distributing your web content, such as images and video, across a network of edge locations around the world. Users' requests are routed to the Fault tolerance 150 AWS Elastic Beanstalk Developer Guide nearest edge location, so content is delivered with the best possible performance. CloudFront works seamlessly with Amazon S3, which durably stores the original, definitive versions of your files. For more information about Amazon CloudFront, see the Amazon CloudFront Developer Guide. Software updates and patching AWS Elastic Beanstalk regularly releases platform updates to provide fixes, software updates, and new features. Elastic Beanstalk offers several options to handle platform updates. With managed platform updates your environment automatically upgrades to the latest version of a platform during a scheduled maintenance window while your application remains in service. For environments created on November 25, 2019 or later using the Elastic Beanstalk console, managed updates are enabled by default whenever possible. You can also manually initiate updates using the Elastic Beanstalk console or EB CLI. Connectivity Elastic Beanstalk needs to be able to connect to the instances in your environment to complete deployments. When you deploy an Elastic Beanstalk application inside an Amazon VPC, the configuration required to enable connectivity depends on the type of Amazon VPC environment you create: • For single-instance environments, no additional configuration is required. This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do"} +{"global_id": 228, "doc_id": "beanstalk", "chunk_id": "18", "question_id": 1, "question": "What is required for load-balanced, scalable environments in an Amazon VPC with both public and private subnets?", "answer_span": "you must do the following: • Create a load balancer in the public subnet to route inbound traffic from the internet to the Amazon EC2 instances. • Create a network address translation (NAT) device to route outbound traffic from the Amazon EC2 instances in private subnets to the internet. • Create inbound and outbound routing rules for the Amazon EC2 instances inside the private subnet. • If you're using a NAT instance, configure the security groups for the NAT instance and Amazon EC2 instances to enable internet communication.", "chunk": "additional configuration is required. 
This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do the following: • Create a load balancer in the public subnet to route inbound traffic from the internet to the Amazon EC2 instances. • Create a network address translation (NAT) device to route outbound traffic from the Amazon EC2 instances in private subnets to the internet. • Create inbound and outbound routing rules for the Amazon EC2 instances inside the private subnet. • If you're using a NAT instance, configure the security groups for the NAT instance and Amazon EC2 instances to enable internet communication. • For a load-balanced, scalable environment in an Amazon VPC that has one public subnet, no additional configuration is required. This is because, with this environment, your Amazon EC2 Software updates and patching 151 AWS Elastic Beanstalk Developer Guide instances are configured with a public IP address that enables the instances to communicate with the internet. For more information about using Elastic Beanstalk with Amazon VPC, see Using Elastic Beanstalk with Amazon VPC. Connectivity 152 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platforms AWS Elastic Beanstalk provides a variety of platforms on which you can build your applications. You design your web application to one of these platforms, and Elastic Beanstalk deploys your code to the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. Topics • Elastic Beanstalk platforms glossary • Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support"} +{"global_id": 229, "doc_id": "beanstalk", "chunk_id": "18", "question_id": 2, "question": "What is the reason no additional configuration is required for a load-balanced, scalable environment in an Amazon VPC that has one public subnet?", "answer_span": "This is because, with this environment, your Amazon EC2 instances are configured with a public IP address that enables the instances to communicate with the internet.", "chunk": "additional configuration is required. This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do the following: • Create a load balancer in the public subnet to route inbound traffic from the internet to the Amazon EC2 instances. • Create a network address translation (NAT) device to route outbound traffic from the Amazon EC2 instances in private subnets to the internet. • Create inbound and outbound routing rules for the Amazon EC2 instances inside the private subnet. • If you're using a NAT instance, configure the security groups for the NAT instance and Amazon EC2 instances to enable internet communication. • For a load-balanced, scalable environment in an Amazon VPC that has one public subnet, no additional configuration is required. 
This is because, with this environment, your Amazon EC2 Software updates and patching 151 AWS Elastic Beanstalk Developer Guide instances are configured with a public IP address that enables the instances to communicate with the internet. For more information about using Elastic Beanstalk with Amazon VPC, see Using Elastic Beanstalk with Amazon VPC. Connectivity 152 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platforms AWS Elastic Beanstalk provides a variety of platforms on which you can build your applications. You design your web application to one of these platforms, and Elastic Beanstalk deploys your code to the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. Topics • Elastic Beanstalk platforms glossary • Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support"} +{"global_id": 230, "doc_id": "beanstalk", "chunk_id": "18", "question_id": 3, "question": "What does Elastic Beanstalk provide for building applications?", "answer_span": "Elastic Beanstalk provides a variety of platforms on which you can build your applications.", "chunk": "additional configuration is required. This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do the following: • Create a load balancer in the public subnet to route inbound traffic from the internet to the Amazon EC2 instances. • Create a network address translation (NAT) device to route outbound traffic from the Amazon EC2 instances in private subnets to the internet. • Create inbound and outbound routing rules for the Amazon EC2 instances inside the private subnet. • If you're using a NAT instance, configure the security groups for the NAT instance and Amazon EC2 instances to enable internet communication. • For a load-balanced, scalable environment in an Amazon VPC that has one public subnet, no additional configuration is required. This is because, with this environment, your Amazon EC2 Software updates and patching 151 AWS Elastic Beanstalk Developer Guide instances are configured with a public IP address that enables the instances to communicate with the internet. For more information about using Elastic Beanstalk with Amazon VPC, see Using Elastic Beanstalk with Amazon VPC. Connectivity 152 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platforms AWS Elastic Beanstalk provides a variety of platforms on which you can build your applications. You design your web application to one of these platforms, and Elastic Beanstalk deploys your code to the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. 
Topics • Elastic Beanstalk platforms glossary • Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support"} +{"global_id": 231, "doc_id": "beanstalk", "chunk_id": "18", "question_id": 4, "question": "What can you design your web application to in Elastic Beanstalk?", "answer_span": "You design your web application to one of these platforms, and Elastic Beanstalk deploys your code to the platform version you selected to create an active application environment.", "chunk": "additional configuration is required. This is because, with these environments, Elastic Beanstalk assigns each Amazon EC2 instance a public Elastic IP address that enables the instance to communicate directly with the internet. • For load-balanced, scalable environments in an Amazon VPC with both public and private subnets, you must do the following: • Create a load balancer in the public subnet to route inbound traffic from the internet to the Amazon EC2 instances. • Create a network address translation (NAT) device to route outbound traffic from the Amazon EC2 instances in private subnets to the internet. • Create inbound and outbound routing rules for the Amazon EC2 instances inside the private subnet. • If you're using a NAT instance, configure the security groups for the NAT instance and Amazon EC2 instances to enable internet communication. • For a load-balanced, scalable environment in an Amazon VPC that has one public subnet, no additional configuration is required. This is because, with this environment, your Amazon EC2 Software updates and patching 151 AWS Elastic Beanstalk Developer Guide instances are configured with a public IP address that enables the instances to communicate with the internet. For more information about using Elastic Beanstalk with Amazon VPC, see Using Elastic Beanstalk with Amazon VPC. Connectivity 152 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platforms AWS Elastic Beanstalk provides a variety of platforms on which you can build your applications. You design your web application to one of these platforms, and Elastic Beanstalk deploys your code to the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. Topics • Elastic Beanstalk platforms glossary �� Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support"} +{"global_id": 232, "doc_id": "beanstalk", "chunk_id": "19", "question_id": 1, "question": "What does Elastic Beanstalk provide platforms for?", "answer_span": "Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers.", "chunk": "the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. 
Topics • Elastic Beanstalk platforms glossary • Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support policy • Elastic Beanstalk platform release schedule • Elastic Beanstalk supported platforms • Elastic Beanstalk Linux platforms • Extending Elastic Beanstalk Linux platforms Elastic Beanstalk platforms glossary Following are key terms related to AWS Elastic Beanstalk platforms and their lifecycle. Runtime The programming language-specific runtime software (framework, libraries, interpreter, vm, etc.) required to run your application code. Elastic Beanstalk Components Software components that Elastic Beanstalk adds to a platform to enable Elastic Beanstalk functionality. For example, the enhanced health agent is necessary for gathering and reporting health information. Platform A combination of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. Platforms provide components that are available to run your application. Platforms glossary 742 AWS Elastic Beanstalk Developer Guide Platform Version A combination of specific versions of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. You create an Elastic Beanstalk environment based on a platform version and deploy your application to it. A platform version has a semantic version number of the form X.Y.Z, where X is the major version, Y is the minor version, and Z is the patch version. A platform version can be in one of the following states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. While these versions may remain"} +{"global_id": 233, "doc_id": "beanstalk", "chunk_id": "19", "question_id": 2, "question": "What is a platform version?", "answer_span": "A platform version is a combination of specific versions of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components.", "chunk": "the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. Topics • Elastic Beanstalk platforms glossary • Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support policy • Elastic Beanstalk platform release schedule • Elastic Beanstalk supported platforms • Elastic Beanstalk Linux platforms • Extending Elastic Beanstalk Linux platforms Elastic Beanstalk platforms glossary Following are key terms related to AWS Elastic Beanstalk platforms and their lifecycle. Runtime The programming language-specific runtime software (framework, libraries, interpreter, vm, etc.) required to run your application code. Elastic Beanstalk Components Software components that Elastic Beanstalk adds to a platform to enable Elastic Beanstalk functionality. For example, the enhanced health agent is necessary for gathering and reporting health information. Platform A combination of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. Platforms provide components that are available to run your application. 
Platforms glossary 742 AWS Elastic Beanstalk Developer Guide Platform Version A combination of specific versions of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. You create an Elastic Beanstalk environment based on a platform version and deploy your application to it. A platform version has a semantic version number of the form X.Y.Z, where X is the major version, Y is the minor version, and Z is the patch version. A platform version can be in one of the following states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. While these versions may remain"} +{"global_id": 234, "doc_id": "beanstalk", "chunk_id": "19", "question_id": 3, "question": "What is the semantic version number format for a platform version?", "answer_span": "A platform version has a semantic version number of the form X.Y.Z, where X is the major version, Y is the minor version, and Z is the patch version.", "chunk": "the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. Topics • Elastic Beanstalk platforms glossary • Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support policy • Elastic Beanstalk platform release schedule • Elastic Beanstalk supported platforms • Elastic Beanstalk Linux platforms • Extending Elastic Beanstalk Linux platforms Elastic Beanstalk platforms glossary Following are key terms related to AWS Elastic Beanstalk platforms and their lifecycle. Runtime The programming language-specific runtime software (framework, libraries, interpreter, vm, etc.) required to run your application code. Elastic Beanstalk Components Software components that Elastic Beanstalk adds to a platform to enable Elastic Beanstalk functionality. For example, the enhanced health agent is necessary for gathering and reporting health information. Platform A combination of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. Platforms provide components that are available to run your application. Platforms glossary 742 AWS Elastic Beanstalk Developer Guide Platform Version A combination of specific versions of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. You create an Elastic Beanstalk environment based on a platform version and deploy your application to it. A platform version has a semantic version number of the form X.Y.Z, where X is the major version, Y is the minor version, and Z is the patch version. A platform version can be in one of the following states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. 
While these versions may remain"} +{"global_id": 235, "doc_id": "beanstalk", "chunk_id": "19", "question_id": 4, "question": "What state is a platform version in if it is the latest platform version in a supported platform branch?", "answer_span": "Recommended – The latest platform version in a supported platform branch.", "chunk": "the platform version you selected to create an active application environment. Elastic Beanstalk provides platforms for different programming languages, application servers, and Docker containers. Some platforms have multiple concurrently-supported versions. Topics • Elastic Beanstalk platforms glossary • Shared responsibility model for Elastic Beanstalk platform maintenance • Elastic Beanstalk platform support policy • Elastic Beanstalk platform release schedule • Elastic Beanstalk supported platforms • Elastic Beanstalk Linux platforms • Extending Elastic Beanstalk Linux platforms Elastic Beanstalk platforms glossary Following are key terms related to AWS Elastic Beanstalk platforms and their lifecycle. Runtime The programming language-specific runtime software (framework, libraries, interpreter, vm, etc.) required to run your application code. Elastic Beanstalk Components Software components that Elastic Beanstalk adds to a platform to enable Elastic Beanstalk functionality. For example, the enhanced health agent is necessary for gathering and reporting health information. Platform A combination of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. Platforms provide components that are available to run your application. Platforms glossary 742 AWS Elastic Beanstalk Developer Guide Platform Version A combination of specific versions of an operating system (OS), runtime, web server, application server, and Elastic Beanstalk components. You create an Elastic Beanstalk environment based on a platform version and deploy your application to it. A platform version has a semantic version number of the form X.Y.Z, where X is the major version, Y is the minor version, and Z is the patch version. A platform version can be in one of the following states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. While these versions may remain"} +{"global_id": 236, "doc_id": "beanstalk", "chunk_id": "20", "question_id": 1, "question": "What is the recommended platform version?", "answer_span": "The latest platform version in a supported platform branch.", "chunk": "states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. While these versions may remain functional, we strongly recommend updating to the latest platform version. You can use managed platform updates to help stay up-to-date automatically. You can verify if a platform version is recommended using the AWS CLI command describeplatform-version and checking the PlatformLifecycleState field. Platform Branch A line of platform versions sharing specific (typically major) versions of some of their components, such as the operating system (OS), runtime, or Elastic Beanstalk components. 
For example: Python 3.13 running on 64bit Amazon Linux 2023; IIS 10.0 running on 64bit Windows Server 2025. Platform branches receive updates in the form of new platform versions. Each successive platform version in a branch is an update to the previous one. The recommended version in each supported platform branch is available to you unconditionally for environment creation. A previous platform version is available to you if you were using an environment with it at the time the platform version was superceded by a new platform version. Previous platform versions lack the most up-to-date components and aren't recommended for use. A platform branch can be in one of the following states: • Supported – A current platform branch. It consists entirely of supported components. Supported components have not reached End of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. • Beta"} +{"global_id": 237, "doc_id": "beanstalk", "chunk_id": "20", "question_id": 2, "question": "What should you do if you are using a not recommended platform version?", "answer_span": "We strongly recommend updating to the latest platform version.", "chunk": "states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. While these versions may remain functional, we strongly recommend updating to the latest platform version. You can use managed platform updates to help stay up-to-date automatically. You can verify if a platform version is recommended using the AWS CLI command describeplatform-version and checking the PlatformLifecycleState field. Platform Branch A line of platform versions sharing specific (typically major) versions of some of their components, such as the operating system (OS), runtime, or Elastic Beanstalk components. For example: Python 3.13 running on 64bit Amazon Linux 2023; IIS 10.0 running on 64bit Windows Server 2025. Platform branches receive updates in the form of new platform versions. Each successive platform version in a branch is an update to the previous one. The recommended version in each supported platform branch is available to you unconditionally for environment creation. A previous platform version is available to you if you were using an environment with it at the time the platform version was superceded by a new platform version. Previous platform versions lack the most up-to-date components and aren't recommended for use. A platform branch can be in one of the following states: • Supported – A current platform branch. It consists entirely of supported components. Supported components have not reached End of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. 
• Beta"} +{"global_id": 238, "doc_id": "beanstalk", "chunk_id": "20", "question_id": 3, "question": "How can you verify if a platform version is recommended?", "answer_span": "You can verify if a platform version is recommended using the AWS CLI command describeplatform-version and checking the PlatformLifecycleState field.", "chunk": "states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. While these versions may remain functional, we strongly recommend updating to the latest platform version. You can use managed platform updates to help stay up-to-date automatically. You can verify if a platform version is recommended using the AWS CLI command describeplatform-version and checking the PlatformLifecycleState field. Platform Branch A line of platform versions sharing specific (typically major) versions of some of their components, such as the operating system (OS), runtime, or Elastic Beanstalk components. For example: Python 3.13 running on 64bit Amazon Linux 2023; IIS 10.0 running on 64bit Windows Server 2025. Platform branches receive updates in the form of new platform versions. Each successive platform version in a branch is an update to the previous one. The recommended version in each supported platform branch is available to you unconditionally for environment creation. A previous platform version is available to you if you were using an environment with it at the time the platform version was superceded by a new platform version. Previous platform versions lack the most up-to-date components and aren't recommended for use. A platform branch can be in one of the following states: • Supported – A current platform branch. It consists entirely of supported components. Supported components have not reached End of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. • Beta"} +{"global_id": 239, "doc_id": "beanstalk", "chunk_id": "20", "question_id": 4, "question": "What does a supported platform branch consist of?", "answer_span": "It consists entirely of supported components.", "chunk": "states: • Recommended – The latest platform version in a supported platform branch. This version contains the most up-to-date components and is recommended for use in production environments. • Not Recommended – Any platform version that is not the latest version in its platform branch. While these versions may remain functional, we strongly recommend updating to the latest platform version. You can use managed platform updates to help stay up-to-date automatically. You can verify if a platform version is recommended using the AWS CLI command describeplatform-version and checking the PlatformLifecycleState field. Platform Branch A line of platform versions sharing specific (typically major) versions of some of their components, such as the operating system (OS), runtime, or Elastic Beanstalk components. For example: Python 3.13 running on 64bit Amazon Linux 2023; IIS 10.0 running on 64bit Windows Server 2025. Platform branches receive updates in the form of new platform versions. 
Each successive platform version in a branch is an update to the previous one. The recommended version in each supported platform branch is available to you unconditionally for environment creation. A previous platform version is available to you if you were using an environment with it at the time the platform version was superceded by a new platform version. Previous platform versions lack the most up-to-date components and aren't recommended for use. A platform branch can be in one of the following states: • Supported – A current platform branch. It consists entirely of supported components. Supported components have not reached End of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. • Beta"} +{"global_id": 240, "doc_id": "beanstalk", "chunk_id": "21", "question_id": 1, "question": "What is a beta platform branch?", "answer_span": "A beta platform branch isn't recommended for use in production environments.", "chunk": "of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. • Beta – A preview, pre-release platform branch. It's experimental in nature. It may receive ongoing platform updates for a while, but has no long-term support. A beta platform branch isn't recommended for use in production environments. Use it only for evaluation. For a list of beta platform branches, see Elastic Beanstalk Platform Versions in Public Beta in the AWS Elastic Beanstalk Platforms guide. • Deprecated – A platform branch where one or more components (such as the runtime or operating system) are approaching End of Life (EOL) or have reached EOL, as designated by their suppliers. While a deprecated platform branch continues to receive new platform versions until its retirement date, components that have reached EOL don't receive updates. For example, if a runtime version reaches EOL, the platform branch will be marked as deprecated but will continue to receive operating system updates until the platform branch retirement date. The platform branch will not continue to receive updates to the EOL runtime version. A deprecated platform branch isn't recommended for use. • Retired – A platform branch that no longer receives any updates. Retired platform branches aren't available to create new Elastic Beanstalk environments using the Elastic Beanstalk console. If your environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. 
For a list of platform branches scheduled for retirement, see"} +{"global_id": 241, "doc_id": "beanstalk", "chunk_id": "21", "question_id": 2, "question": "What happens to a deprecated platform branch that has components reaching EOL?", "answer_span": "While a deprecated platform branch continues to receive new platform versions until its retirement date, components that have reached EOL don't receive updates.", "chunk": "of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. • Beta – A preview, pre-release platform branch. It's experimental in nature. It may receive ongoing platform updates for a while, but has no long-term support. A beta platform branch isn't recommended for use in production environments. Use it only for evaluation. For a list of beta platform branches, see Elastic Beanstalk Platform Versions in Public Beta in the AWS Elastic Beanstalk Platforms guide. • Deprecated – A platform branch where one or more components (such as the runtime or operating system) are approaching End of Life (EOL) or have reached EOL, as designated by their suppliers. While a deprecated platform branch continues to receive new platform versions until its retirement date, components that have reached EOL don't receive updates. For example, if a runtime version reaches EOL, the platform branch will be marked as deprecated but will continue to receive operating system updates until the platform branch retirement date. The platform branch will not continue to receive updates to the EOL runtime version. A deprecated platform branch isn't recommended for use. • Retired – A platform branch that no longer receives any updates. Retired platform branches aren't available to create new Elastic Beanstalk environments using the Elastic Beanstalk console. If your environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. For a list of platform branches scheduled for retirement, see"} +{"global_id": 242, "doc_id": "beanstalk", "chunk_id": "21", "question_id": 3, "question": "What is the status of a retired platform branch?", "answer_span": "A platform branch that no longer receives any updates.", "chunk": "of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. • Beta – A preview, pre-release platform branch. It's experimental in nature. It may receive ongoing platform updates for a while, but has no long-term support. A beta platform branch isn't recommended for use in production environments. Use it only for evaluation. For a list of beta platform branches, see Elastic Beanstalk Platform Versions in Public Beta in the AWS Elastic Beanstalk Platforms guide. • Deprecated – A platform branch where one or more components (such as the runtime or operating system) are approaching End of Life (EOL) or have reached EOL, as designated by their suppliers. 
While a deprecated platform branch continues to receive new platform versions until its retirement date, components that have reached EOL don't receive updates. For example, if a runtime version reaches EOL, the platform branch will be marked as deprecated but will continue to receive operating system updates until the platform branch retirement date. The platform branch will not continue to receive updates to the EOL runtime version. A deprecated platform branch isn't recommended for use. • Retired – A platform branch that no longer receives any updates. Retired platform branches aren't available to create new Elastic Beanstalk environments using the Elastic Beanstalk console. If your environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. For a list of platform branches scheduled for retirement, see"} +{"global_id": 243, "doc_id": "beanstalk", "chunk_id": "21", "question_id": 4, "question": "What should you do if your environment uses a retired platform branch?", "answer_span": "If your environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates.", "chunk": "of Life (EOL), as designated by their suppliers. It receives ongoing platform updates, and is recommended for use in production Platforms glossary 743 AWS Elastic Beanstalk Developer Guide environments. For a list of supported platform branches, see Elastic Beanstalk supported platforms in the AWS Elastic Beanstalk Platforms guide. • Beta – A preview, pre-release platform branch. It's experimental in nature. It may receive ongoing platform updates for a while, but has no long-term support. A beta platform branch isn't recommended for use in production environments. Use it only for evaluation. For a list of beta platform branches, see Elastic Beanstalk Platform Versions in Public Beta in the AWS Elastic Beanstalk Platforms guide. • Deprecated – A platform branch where one or more components (such as the runtime or operating system) are approaching End of Life (EOL) or have reached EOL, as designated by their suppliers. While a deprecated platform branch continues to receive new platform versions until its retirement date, components that have reached EOL don't receive updates. For example, if a runtime version reaches EOL, the platform branch will be marked as deprecated but will continue to receive operating system updates until the platform branch retirement date. The platform branch will not continue to receive updates to the EOL runtime version. A deprecated platform branch isn't recommended for use. • Retired – A platform branch that no longer receives any updates. Retired platform branches aren't available to create new Elastic Beanstalk environments using the Elastic Beanstalk console. If your environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. 
For a list of platform branches scheduled for retirement, see"} +{"global_id": 244, "doc_id": "beanstalk", "chunk_id": "22", "question_id": 1, "question": "What must you do if your environment uses a retired platform branch?", "answer_span": "you must update to a supported platform branch to continue receiving updates.", "chunk": "environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. For a list of platform branches scheduled for retirement, see Retiring platform branch schedule. To see past retired platform branches, see Retired platform branch history. If your environment uses a deprecated or retired platform branch, we recommend that you update it to a platform version in a supported platform branch. For details, see the section called “Platform updates”. You can verify the state of a platform branch using the AWS CLI command describe-platformversion and checking the PlatformBranchLifecycleState field. Platform Update A release of new platform versions that contain updates to some components of the platform —OS, runtime, web server, application server, and Elastic Beanstalk components. Platform updates follow semantic version taxonomy, and can have three levels: Platforms glossary 744 AWS Elastic Beanstalk Developer Guide • Major update – An update that has changes that are incompatible with existing platform versions. You may need to modify your application to run correctly on a new major version. A major update has a new major platform version number. • Minor update – An update that has changes that are backward compatible with existing platform versions in most cases. Depending on your application, you may need to modify your application to run correctly on a new minor version. A minor update has a new minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. Managed Updates An Elastic Beanstalk feature that automatically applies"} +{"global_id": 245, "doc_id": "beanstalk", "chunk_id": "22", "question_id": 2, "question": "What is not recommended for use?", "answer_span": "A retired platform branch isn't recommended for use.", "chunk": "environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. For a list of platform branches scheduled for retirement, see Retiring platform branch schedule. To see past retired platform branches, see Retired platform branch history. If your environment uses a deprecated or retired platform branch, we recommend that you update it to a platform version in a supported platform branch. For details, see the section called “Platform updates”. You can verify the state of a platform branch using the AWS CLI command describe-platformversion and checking the PlatformBranchLifecycleState field. Platform Update A release of new platform versions that contain updates to some components of the platform —OS, runtime, web server, application server, and Elastic Beanstalk components. 
Platform updates follow semantic version taxonomy, and can have three levels: Platforms glossary 744 AWS Elastic Beanstalk Developer Guide • Major update – An update that has changes that are incompatible with existing platform versions. You may need to modify your application to run correctly on a new major version. A major update has a new major platform version number. • Minor update – An update that has changes that are backward compatible with existing platform versions in most cases. Depending on your application, you may need to modify your application to run correctly on a new minor version. A minor update has a new minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. Managed Updates An Elastic Beanstalk feature that automatically applies"} +{"global_id": 246, "doc_id": "beanstalk", "chunk_id": "22", "question_id": 3, "question": "What is a major update?", "answer_span": "An update that has changes that are incompatible with existing platform versions.", "chunk": "environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. For a list of platform branches scheduled for retirement, see Retiring platform branch schedule. To see past retired platform branches, see Retired platform branch history. If your environment uses a deprecated or retired platform branch, we recommend that you update it to a platform version in a supported platform branch. For details, see the section called “Platform updates”. You can verify the state of a platform branch using the AWS CLI command describe-platformversion and checking the PlatformBranchLifecycleState field. Platform Update A release of new platform versions that contain updates to some components of the platform —OS, runtime, web server, application server, and Elastic Beanstalk components. Platform updates follow semantic version taxonomy, and can have three levels: Platforms glossary 744 AWS Elastic Beanstalk Developer Guide • Major update – An update that has changes that are incompatible with existing platform versions. You may need to modify your application to run correctly on a new major version. A major update has a new major platform version number. • Minor update – An update that has changes that are backward compatible with existing platform versions in most cases. Depending on your application, you may need to modify your application to run correctly on a new minor version. A minor update has a new minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. 
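The chunk above notes that the platform branch state can be read from the PlatformBranchLifecycleState field via the AWS CLI describe-platform-version command. A minimal boto3 sketch of that check, assuming configured AWS credentials and a hypothetical platform ARN (substitute one of your own, for example from list-platform-versions):

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Hypothetical platform ARN used for illustration only.
    platform_arn = (
        "arn:aws:elasticbeanstalk:us-east-1::platform/"
        "Python 3.11 running on 64bit Amazon Linux 2023/4.0.0"
    )

    desc = eb.describe_platform_version(PlatformArn=platform_arn)["PlatformDescription"]
    # PlatformBranchLifecycleState is Supported, Beta, Deprecated, or Retired.
    print(desc["PlatformBranchName"], desc["PlatformBranchLifecycleState"])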
Managed Updates An Elastic Beanstalk feature that automatically applies"} +{"global_id": 247, "doc_id": "beanstalk", "chunk_id": "22", "question_id": 4, "question": "What does a patch update consist of?", "answer_span": "An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version.", "chunk": "environment uses a retired platform branch, you must update to a supported platform branch to continue receiving updates. A retired platform branch isn't recommended for use. For more details about retired platform branches, see the section called “Platform support policy”. For a list of platform branches scheduled for retirement, see Retiring platform branch schedule. To see past retired platform branches, see Retired platform branch history. If your environment uses a deprecated or retired platform branch, we recommend that you update it to a platform version in a supported platform branch. For details, see the section called “Platform updates”. You can verify the state of a platform branch using the AWS CLI command describe-platformversion and checking the PlatformBranchLifecycleState field. Platform Update A release of new platform versions that contain updates to some components of the platform —OS, runtime, web server, application server, and Elastic Beanstalk components. Platform updates follow semantic version taxonomy, and can have three levels: Platforms glossary 744 AWS Elastic Beanstalk Developer Guide • Major update – An update that has changes that are incompatible with existing platform versions. You may need to modify your application to run correctly on a new major version. A major update has a new major platform version number. • Minor update – An update that has changes that are backward compatible with existing platform versions in most cases. Depending on your application, you may need to modify your application to run correctly on a new minor version. A minor update has a new minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. Managed Updates An Elastic Beanstalk feature that automatically applies"} +{"global_id": 248, "doc_id": "beanstalk", "chunk_id": "23", "question_id": 1, "question": "What is a patch update?", "answer_span": "A patch update has a new patch platform version number.", "chunk": "minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. Managed Updates An Elastic Beanstalk feature that automatically applies patch and minor updates to the operating system (OS), runtime, web server, application server, and Elastic Beanstalk components for an Elastic Beanstalk supported platform version. A managed update applies a newer platform version in the same platform branch to your environment. You can configure managed updates to apply only patch updates, or minor and patch updates. You can also disable managed updates completely. For more information, see Managed platform updates. 
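A minimal sketch of turning on managed updates for an existing environment through option settings, assuming a hypothetical environment name; the aws:elasticbeanstalk:managedactions namespaces shown here follow the Elastic Beanstalk configuration options, and UpdateLevel can be "patch" or "minor" as described above:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # "my-env" is a placeholder environment name.
    eb.update_environment(
        EnvironmentName="my-env",
        OptionSettings=[
            {"Namespace": "aws:elasticbeanstalk:managedactions",
             "OptionName": "ManagedActionsEnabled", "Value": "true"},
            {"Namespace": "aws:elasticbeanstalk:managedactions",
             "OptionName": "PreferredStartTime", "Value": "Sun:02:00"},
            {"Namespace": "aws:elasticbeanstalk:managedactions:platformupdate",
             "OptionName": "UpdateLevel", "Value": "minor"},
        ],
    )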
Shared responsibility model for Elastic Beanstalk platform maintenance AWS and our customers share responsibility for achieving a high level of software component security and compliance. This shared model reduces your operational burden. For details, see the AWS Shared Responsibility Model. AWS Elastic Beanstalk helps you perform your side of the shared responsibility model by providing a managed updates feature. This feature automatically applies patch and minor updates for an Elastic Beanstalk supported platform version. If a managed update fails, Elastic Beanstalk notifies you of the failure to ensure that you are aware of it and can take immediate action. For more information, see Managed platform updates. In addition, Elastic Beanstalk does the following: Shared responsibility model 745 AWS Elastic Beanstalk Developer Guide • Publishes its platform support policy and retirement schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions."} +{"global_id": 249, "doc_id": "beanstalk", "chunk_id": "23", "question_id": 2, "question": "What does a managed update do?", "answer_span": "A managed update applies a newer platform version in the same platform branch to your environment.", "chunk": "minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. Managed Updates An Elastic Beanstalk feature that automatically applies patch and minor updates to the operating system (OS), runtime, web server, application server, and Elastic Beanstalk components for an Elastic Beanstalk supported platform version. A managed update applies a newer platform version in the same platform branch to your environment. You can configure managed updates to apply only patch updates, or minor and patch updates. You can also disable managed updates completely. For more information, see Managed platform updates. Shared responsibility model for Elastic Beanstalk platform maintenance AWS and our customers share responsibility for achieving a high level of software component security and compliance. This shared model reduces your operational burden. For details, see the AWS Shared Responsibility Model. AWS Elastic Beanstalk helps you perform your side of the shared responsibility model by providing a managed updates feature. This feature automatically applies patch and minor updates for an Elastic Beanstalk supported platform version. If a managed update fails, Elastic Beanstalk notifies you of the failure to ensure that you are aware of it and can take immediate action. For more information, see Managed platform updates. In addition, Elastic Beanstalk does the following: Shared responsibility model 745 AWS Elastic Beanstalk Developer Guide • Publishes its platform support policy and retirement schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. 
Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions."} +{"global_id": 250, "doc_id": "beanstalk", "chunk_id": "23", "question_id": 3, "question": "What does AWS Elastic Beanstalk help you with regarding the shared responsibility model?", "answer_span": "AWS Elastic Beanstalk helps you perform your side of the shared responsibility model by providing a managed updates feature.", "chunk": "minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. Managed Updates An Elastic Beanstalk feature that automatically applies patch and minor updates to the operating system (OS), runtime, web server, application server, and Elastic Beanstalk components for an Elastic Beanstalk supported platform version. A managed update applies a newer platform version in the same platform branch to your environment. You can configure managed updates to apply only patch updates, or minor and patch updates. You can also disable managed updates completely. For more information, see Managed platform updates. Shared responsibility model for Elastic Beanstalk platform maintenance AWS and our customers share responsibility for achieving a high level of software component security and compliance. This shared model reduces your operational burden. For details, see the AWS Shared Responsibility Model. AWS Elastic Beanstalk helps you perform your side of the shared responsibility model by providing a managed updates feature. This feature automatically applies patch and minor updates for an Elastic Beanstalk supported platform version. If a managed update fails, Elastic Beanstalk notifies you of the failure to ensure that you are aware of it and can take immediate action. For more information, see Managed platform updates. In addition, Elastic Beanstalk does the following: Shared responsibility model 745 AWS Elastic Beanstalk Developer Guide • Publishes its platform support policy and retirement schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions."} +{"global_id": 251, "doc_id": "beanstalk", "chunk_id": "23", "question_id": 4, "question": "How often does Elastic Beanstalk release updates?", "answer_span": "Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability.", "chunk": "minor platform version number. • Patch update – An update that consists of maintenance releases (bug fixes, security updates, and performance improvements) that are backward compatible with an existing platform version. A patch update has a new patch platform version number. Managed Updates An Elastic Beanstalk feature that automatically applies patch and minor updates to the operating system (OS), runtime, web server, application server, and Elastic Beanstalk components for an Elastic Beanstalk supported platform version. A managed update applies a newer platform version in the same platform branch to your environment. 
You can configure managed updates to apply only patch updates, or minor and patch updates. You can also disable managed updates completely. For more information, see Managed platform updates. Shared responsibility model for Elastic Beanstalk platform maintenance AWS and our customers share responsibility for achieving a high level of software component security and compliance. This shared model reduces your operational burden. For details, see the AWS Shared Responsibility Model. AWS Elastic Beanstalk helps you perform your side of the shared responsibility model by providing a managed updates feature. This feature automatically applies patch and minor updates for an Elastic Beanstalk supported platform version. If a managed update fails, Elastic Beanstalk notifies you of the failure to ensure that you are aware of it and can take immediate action. For more information, see Managed platform updates. In addition, Elastic Beanstalk does the following: Shared responsibility model 745 AWS Elastic Beanstalk Developer Guide • Publishes its platform support policy and retirement schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions."} +{"global_id": 252, "doc_id": "beanstalk", "chunk_id": "24", "question_id": 1, "question": "What is Elastic Beanstalk responsible for?", "answer_span": "Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions.", "chunk": "schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions. All other updates come directly from their suppliers (owners or community). We announce all updates to our supported platforms in our release notes in the AWS Elastic Beanstalk Release Notes guide. We also provide a list of all supported platforms and their components, along with a platform history, in the AWS Elastic Beanstalk Platforms guide. For more information see Supported platforms and component history. You are responsible to do the following: • Update all the components that you control (identified as Customer in the AWS Shared Responsibility Model). This includes ensuring the security of your application, your data, and any components that your application requires and that you downloaded. • Ensure that your Elastic Beanstalk environments are running on a supported platform version, and migrate any environment running on a retired platform version to a supported version. • If you’re using a custom Amazon machine image (AMI) for your Elastic Beanstalk environment, patch, maintain, and test your custom AMI so that it remains current and compatible with a supported Elastic Beanstalk platform version. For more information about managing environments with a custom AMI, see Using a custom Amazon machine image (AMI) in your Elastic Beanstalk environment. • Resolve all issues that come up in failed managed update attempts and retry the update. 
• Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment"} +{"global_id": 253, "doc_id": "beanstalk", "chunk_id": "24", "question_id": 2, "question": "What should you ensure about your Elastic Beanstalk environments?", "answer_span": "Ensure that your Elastic Beanstalk environments are running on a supported platform version, and migrate any environment running on a retired platform version to a supported version.", "chunk": "schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions. All other updates come directly from their suppliers (owners or community). We announce all updates to our supported platforms in our release notes in the AWS Elastic Beanstalk Release Notes guide. We also provide a list of all supported platforms and their components, along with a platform history, in the AWS Elastic Beanstalk Platforms guide. For more information see Supported platforms and component history. You are responsible to do the following: • Update all the components that you control (identified as Customer in the AWS Shared Responsibility Model). This includes ensuring the security of your application, your data, and any components that your application requires and that you downloaded. • Ensure that your Elastic Beanstalk environments are running on a supported platform version, and migrate any environment running on a retired platform version to a supported version. • If you’re using a custom Amazon machine image (AMI) for your Elastic Beanstalk environment, patch, maintain, and test your custom AMI so that it remains current and compatible with a supported Elastic Beanstalk platform version. For more information about managing environments with a custom AMI, see Using a custom Amazon machine image (AMI) in your Elastic Beanstalk environment. • Resolve all issues that come up in failed managed update attempts and retry the update. • Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment"} +{"global_id": 254, "doc_id": "beanstalk", "chunk_id": "24", "question_id": 3, "question": "What must you do if you’re using a custom Amazon machine image (AMI)?", "answer_span": "If you’re using a custom Amazon machine image (AMI) for your Elastic Beanstalk environment, patch, maintain, and test your custom AMI so that it remains current and compatible with a supported Elastic Beanstalk platform version.", "chunk": "schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions. All other updates come directly from their suppliers (owners or community). We announce all updates to our supported platforms in our release notes in the AWS Elastic Beanstalk Release Notes guide. 
We also provide a list of all supported platforms and their components, along with a platform history, in the AWS Elastic Beanstalk Platforms guide. For more information see Supported platforms and component history. You are responsible to do the following: • Update all the components that you control (identified as Customer in the AWS Shared Responsibility Model). This includes ensuring the security of your application, your data, and any components that your application requires and that you downloaded. • Ensure that your Elastic Beanstalk environments are running on a supported platform version, and migrate any environment running on a retired platform version to a supported version. • If you’re using a custom Amazon machine image (AMI) for your Elastic Beanstalk environment, patch, maintain, and test your custom AMI so that it remains current and compatible with a supported Elastic Beanstalk platform version. For more information about managing environments with a custom AMI, see Using a custom Amazon machine image (AMI) in your Elastic Beanstalk environment. • Resolve all issues that come up in failed managed update attempts and retry the update. • Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment"} +{"global_id": 255, "doc_id": "beanstalk", "chunk_id": "24", "question_id": 4, "question": "What should you do if you opted out of Elastic Beanstalk managed updates?", "answer_span": "Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates.", "chunk": "schedule for the coming 12 months. • Releases patch, minor, and major updates of operating system (OS), runtime, application server, and web server components typically within 30 days of their availability. Elastic Beanstalk is responsible for creating updates to Elastic Beanstalk components that are present on its supported platform versions. All other updates come directly from their suppliers (owners or community). We announce all updates to our supported platforms in our release notes in the AWS Elastic Beanstalk Release Notes guide. We also provide a list of all supported platforms and their components, along with a platform history, in the AWS Elastic Beanstalk Platforms guide. For more information see Supported platforms and component history. You are responsible to do the following: • Update all the components that you control (identified as Customer in the AWS Shared Responsibility Model). This includes ensuring the security of your application, your data, and any components that your application requires and that you downloaded. • Ensure that your Elastic Beanstalk environments are running on a supported platform version, and migrate any environment running on a retired platform version to a supported version. • If you’re using a custom Amazon machine image (AMI) for your Elastic Beanstalk environment, patch, maintain, and test your custom AMI so that it remains current and compatible with a supported Elastic Beanstalk platform version. For more information about managing environments with a custom AMI, see Using a custom Amazon machine image (AMI) in your Elastic Beanstalk environment. • Resolve all issues that come up in failed managed update attempts and retry the update. 
• Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment"} +{"global_id": 256, "doc_id": "beanstalk", "chunk_id": "25", "question_id": 1, "question": "What should you do if you opted out of Elastic Beanstalk managed updates?", "answer_span": "Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates.", "chunk": "that come up in failed managed update attempts and retry the update. • Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment resources. • Manage the security and compliance of any AWS services that you use outside of Elastic Beanstalk according to the AWS Shared Responsibility Model. Shared responsibility model 746 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platform support policy Elastic Beanstalk supports platform branches that still receive ongoing minor and patch updates from their suppliers (owners or community). For a complete definition of related terms, see Elastic Beanstalk platforms glossary. Retired platform branches When a component of a supported platform branch is marked End of Life (EOL) by its supplier, Elastic Beanstalk marks the platform branch as retired. Components of a platform branch include the following: operating system (OS), runtime language version, application server, or web server. Once a platform branch is marked as retired the following policies apply: • Elastic Beanstalk stops providing maintenance updates, including security updates. • Elastic Beanstalk no longer provides technical support for retired platform branches. • Elastic Beanstalk no longer makes the platform branch available to new Elastic Beanstalk customers for deployments to new environments. There is a 90 day grace period from the published retirement date for existing customers with active environments that are running on retired platform branches. Note A retired platform branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. Existing customers can also use the Clone environment and Rebuild environment consoles. For a"} +{"global_id": 257, "doc_id": "beanstalk", "chunk_id": "25", "question_id": 2, "question": "What happens when a component of a supported platform branch is marked End of Life (EOL)?", "answer_span": "Elastic Beanstalk marks the platform branch as retired.", "chunk": "that come up in failed managed update attempts and retry the update. • Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment resources. • Manage the security and compliance of any AWS services that you use outside of Elastic Beanstalk according to the AWS Shared Responsibility Model. 
Shared responsibility model 746 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platform support policy Elastic Beanstalk supports platform branches that still receive ongoing minor and patch updates from their suppliers (owners or community). For a complete definition of related terms, see Elastic Beanstalk platforms glossary. Retired platform branches When a component of a supported platform branch is marked End of Life (EOL) by its supplier, Elastic Beanstalk marks the platform branch as retired. Components of a platform branch include the following: operating system (OS), runtime language version, application server, or web server. Once a platform branch is marked as retired the following policies apply: • Elastic Beanstalk stops providing maintenance updates, including security updates. • Elastic Beanstalk no longer provides technical support for retired platform branches. • Elastic Beanstalk no longer makes the platform branch available to new Elastic Beanstalk customers for deployments to new environments. There is a 90 day grace period from the published retirement date for existing customers with active environments that are running on retired platform branches. Note A retired platform branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. Existing customers can also use the Clone environment and Rebuild environment consoles. For a"} +{"global_id": 258, "doc_id": "beanstalk", "chunk_id": "25", "question_id": 3, "question": "What policies apply once a platform branch is marked as retired?", "answer_span": "Once a platform branch is marked as retired the following policies apply: • Elastic Beanstalk stops providing maintenance updates, including security updates. • Elastic Beanstalk no longer provides technical support for retired platform branches. • Elastic Beanstalk no longer makes the platform branch available to new Elastic Beanstalk customers for deployments to new environments.", "chunk": "that come up in failed managed update attempts and retry the update. • Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment resources. • Manage the security and compliance of any AWS services that you use outside of Elastic Beanstalk according to the AWS Shared Responsibility Model. Shared responsibility model 746 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platform support policy Elastic Beanstalk supports platform branches that still receive ongoing minor and patch updates from their suppliers (owners or community). For a complete definition of related terms, see Elastic Beanstalk platforms glossary. Retired platform branches When a component of a supported platform branch is marked End of Life (EOL) by its supplier, Elastic Beanstalk marks the platform branch as retired. Components of a platform branch include the following: operating system (OS), runtime language version, application server, or web server. Once a platform branch is marked as retired the following policies apply: • Elastic Beanstalk stops providing maintenance updates, including security updates. • Elastic Beanstalk no longer provides technical support for retired platform branches. 
• Elastic Beanstalk no longer makes the platform branch available to new Elastic Beanstalk customers for deployments to new environments. There is a 90 day grace period from the published retirement date for existing customers with active environments that are running on retired platform branches. Note A retired platform branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. Existing customers can also use the Clone environment and Rebuild environment consoles. For a"} +{"global_id": 259, "doc_id": "beanstalk", "chunk_id": "25", "question_id": 4, "question": "How long is the grace period for existing customers with active environments on retired platform branches?", "answer_span": "There is a 90 day grace period from the published retirement date for existing customers with active environments that are running on retired platform branches.", "chunk": "that come up in failed managed update attempts and retry the update. • Patch the OS, runtime, application server, and web server yourself if you opted out of Elastic Beanstalk managed updates. You can do this by applying platform updates manually or directly patching the components on all relevant environment resources. • Manage the security and compliance of any AWS services that you use outside of Elastic Beanstalk according to the AWS Shared Responsibility Model. Shared responsibility model 746 AWS Elastic Beanstalk Developer Guide Elastic Beanstalk platform support policy Elastic Beanstalk supports platform branches that still receive ongoing minor and patch updates from their suppliers (owners or community). For a complete definition of related terms, see Elastic Beanstalk platforms glossary. Retired platform branches When a component of a supported platform branch is marked End of Life (EOL) by its supplier, Elastic Beanstalk marks the platform branch as retired. Components of a platform branch include the following: operating system (OS), runtime language version, application server, or web server. Once a platform branch is marked as retired the following policies apply: • Elastic Beanstalk stops providing maintenance updates, including security updates. • Elastic Beanstalk no longer provides technical support for retired platform branches. • Elastic Beanstalk no longer makes the platform branch available to new Elastic Beanstalk customers for deployments to new environments. There is a 90 day grace period from the published retirement date for existing customers with active environments that are running on retired platform branches. Note A retired platform branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. Existing customers can also use the Clone environment and Rebuild environment consoles. For a"} +{"global_id": 260, "doc_id": "beanstalk", "chunk_id": "26", "question_id": 1, "question": "Where will the branch not be available?", "answer_span": "branch will not be available in the Elastic Beanstalk console.", "chunk": "branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. 
Existing customers can also use the Clone environment and Rebuild environment consoles. For a list of platform branches that are scheduled for retirement see the Retiring platform branch schedule in the Elastic Beanstalk platform schedule topic that follows. For more information about what to expect when your environment’s platform branch retires, see Platform retirement FAQ. Platform support policy 747"} +{"global_id": 261, "doc_id": "beanstalk", "chunk_id": "26", "question_id": 2, "question": "Through which interfaces will the branch be available?", "answer_span": "However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch.", "chunk": "branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. Existing customers can also use the Clone environment and Rebuild environment consoles. For a list of platform branches that are scheduled for retirement see the Retiring platform branch schedule in the Elastic Beanstalk platform schedule topic that follows. For more information about what to expect when your environment’s platform branch retires, see Platform retirement FAQ. Platform support policy 747"} +{"global_id": 262, "doc_id": "beanstalk", "chunk_id": "26", "question_id": 3, "question": "What can existing customers use regarding their environments?", "answer_span": "Existing customers can also use the Clone environment and Rebuild environment consoles.", "chunk": "branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. Existing customers can also use the Clone environment and Rebuild environment consoles. For a list of platform branches that are scheduled for retirement see the Retiring platform branch schedule in the Elastic Beanstalk platform schedule topic that follows. For more information about what to expect when your environment’s platform branch retires, see Platform retirement FAQ. Platform support policy 747"} +{"global_id": 263, "doc_id": "beanstalk", "chunk_id": "26", "question_id": 4, "question": "Where can you find the schedule for retiring platform branches?", "answer_span": "For a list of platform branches that are scheduled for retirement see the Retiring platform branch schedule in the Elastic Beanstalk platform schedule topic that follows.", "chunk": "branch will not be available in the Elastic Beanstalk console. However, it will be available through the AWS CLI, EB CLI and EB API for customers that have existing environments based on the retired platform branch. Existing customers can also use the Clone environment and Rebuild environment consoles. For a list of platform branches that are scheduled for retirement see the Retiring platform branch schedule in the Elastic Beanstalk platform schedule topic that follows. For more information about what to expect when your environment’s platform branch retires, see Platform retirement FAQ. 
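Since retired branches remain reachable only through the AWS CLI, EB CLI, and EB API, a rough boto3 sketch of finding a supported branch and moving an environment onto it; the environment name and solution stack name are hypothetical placeholders:

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # List platform branches that are still in the Supported state.
    branches = eb.list_platform_branches(
        Filters=[{"Attribute": "LifecycleState", "Operator": "=",
                  "Values": ["Supported"]}]
    )["PlatformBranchSummaryList"]
    for b in branches:
        print(b["PlatformName"], b["BranchName"], b["LifecycleState"])

    # Move an existing environment to a solution stack in a supported branch.
    eb.update_environment(
        EnvironmentName="my-env",
        SolutionStackName="64bit Amazon Linux 2023 v4.3.0 running Python 3.11",
    )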
Platform support policy 747"} +{"global_id": 264, "doc_id": "api-gateway", "chunk_id": "0", "question_id": 1, "question": "What is Amazon API Gateway?", "answer_span": "Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale.", "chunk": "Amazon API Gateway Developer Guide What is Amazon API Gateway? Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud. As an API Gateway API developer, you can create APIs for use in your own client applications. Or you can make your APIs available to third-party app developers. For more information, see the section called “Who uses API Gateway?”. API Gateway creates RESTful APIs that: • Are HTTP-based. • Enable stateless client-server communication. • Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE. For more information about API Gateway REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”, API Gateway HTTP APIs, the section called “Use API Gateway to create REST APIs”, and the section called “Develop”. API Gateway creates WebSocket APIs that: • Adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server. • Route incoming messages based on message content. For more information about API Gateway WebSocket APIs, see the section called “Use API Gateway to create WebSocket APIs” and the section called “Overview of WebSocket APIs”. Topics • Architecture of API Gateway • Features of API Gateway • API Gateway use cases • Accessing API Gateway • Part of AWS serverless infrastructure • How to get started with Amazon API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon"} +{"global_id": 265, "doc_id": "api-gateway", "chunk_id": "0", "question_id": 2, "question": "What types of APIs can API Gateway create?", "answer_span": "API Gateway creates RESTful APIs that: • Are HTTP-based. • Enable stateless client-server communication. • Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.", "chunk": "Amazon API Gateway Developer Guide What is Amazon API Gateway? Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud. As an API Gateway API developer, you can create APIs for use in your own client applications. Or you can make your APIs available to third-party app developers. For more information, see the section called “Who uses API Gateway?”. API Gateway creates RESTful APIs that: • Are HTTP-based. • Enable stateless client-server communication. • Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE. For more information about API Gateway REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”, API Gateway HTTP APIs, the section called “Use API Gateway to create REST APIs”, and the section called “Develop”. 
API Gateway creates WebSocket APIs that: • Adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server. • Route incoming messages based on message content. For more information about API Gateway WebSocket APIs, see the section called “Use API Gateway to create WebSocket APIs” and the section called “Overview of WebSocket APIs”. Topics • Architecture of API Gateway • Features of API Gateway • API Gateway use cases • Accessing API Gateway • Part of AWS serverless infrastructure • How to get started with Amazon API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon"} +{"global_id": 266, "doc_id": "api-gateway", "chunk_id": "0", "question_id": 3, "question": "What protocol do WebSocket APIs adhere to?", "answer_span": "API Gateway creates WebSocket APIs that: • Adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server.", "chunk": "Amazon API Gateway Developer Guide What is Amazon API Gateway? Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud. As an API Gateway API developer, you can create APIs for use in your own client applications. Or you can make your APIs available to third-party app developers. For more information, see the section called “Who uses API Gateway?”. API Gateway creates RESTful APIs that: • Are HTTP-based. • Enable stateless client-server communication. • Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE. For more information about API Gateway REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”, API Gateway HTTP APIs, the section called “Use API Gateway to create REST APIs”, and the section called “Develop”. API Gateway creates WebSocket APIs that: • Adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server. • Route incoming messages based on message content. For more information about API Gateway WebSocket APIs, see the section called “Use API Gateway to create WebSocket APIs” and the section called “Overview of WebSocket APIs”. Topics • Architecture of API Gateway • Features of API Gateway • API Gateway use cases • Accessing API Gateway • Part of AWS serverless infrastructure • How to get started with Amazon API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon"} +{"global_id": 267, "doc_id": "api-gateway", "chunk_id": "0", "question_id": 4, "question": "What can API developers create APIs for?", "answer_span": "API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud.", "chunk": "Amazon API Gateway Developer Guide What is Amazon API Gateway? Amazon API Gateway is an AWS service for creating, publishing, maintaining, monitoring, and securing REST, HTTP, and WebSocket APIs at any scale. 
API developers can create APIs that access AWS or other web services, as well as data stored in the AWS Cloud. As an API Gateway API developer, you can create APIs for use in your own client applications. Or you can make your APIs available to third-party app developers. For more information, see the section called “Who uses API Gateway?”. API Gateway creates RESTful APIs that: • Are HTTP-based. • Enable stateless client-server communication. • Implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE. For more information about API Gateway REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”, API Gateway HTTP APIs, the section called “Use API Gateway to create REST APIs”, and the section called “Develop”. API Gateway creates WebSocket APIs that: • Adhere to the WebSocket protocol, which enables stateful, full-duplex communication between client and server. • Route incoming messages based on message content. For more information about API Gateway WebSocket APIs, see the section called “Use API Gateway to create WebSocket APIs” and the section called “Overview of WebSocket APIs”. Topics • Architecture of API Gateway • Features of API Gateway • API Gateway use cases • Accessing API Gateway • Part of AWS serverless infrastructure • How to get started with Amazon API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon"} +{"global_id": 268, "doc_id": "api-gateway", "chunk_id": "1", "question_id": 1, "question": "What does API Gateway handle?", "answer_span": "API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls.", "chunk": "API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon API Gateway provide you or your developer customers with an integrated and consistent developer experience for building AWS serverless applications. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls. These tasks include traffic management, authorization and access control, monitoring, and API version management. API Gateway acts as a \"front door\" for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, any web application, or real-time communication applications. Features of API Gateway Amazon API Gateway offers features such as the following: Architecture of API Gateway 2 Amazon API Gateway Developer Guide • Support for stateful (WebSocket) and stateless (HTTP and REST) APIs. • Powerful, flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools. • Canary release deployments for safely rolling out changes. • CloudTrail logging and monitoring of API usage and API changes. • CloudWatch access logging and execution logging, including the ability to set alarms. 
For more information, see the section called “CloudWatch metrics” and the section called “Metrics”. • Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with"} +{"global_id": 269, "doc_id": "api-gateway", "chunk_id": "1", "question_id": 2, "question": "What types of APIs does Amazon API Gateway support?", "answer_span": "Support for stateful (WebSocket) and stateless (HTTP and REST) APIs.", "chunk": "API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon API Gateway provide you or your developer customers with an integrated and consistent developer experience for building AWS serverless applications. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls. These tasks include traffic management, authorization and access control, monitoring, and API version management. API Gateway acts as a \"front door\" for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, any web application, or real-time communication applications. Features of API Gateway Amazon API Gateway offers features such as the following: Architecture of API Gateway 2 Amazon API Gateway Developer Guide • Support for stateful (WebSocket) and stateless (HTTP and REST) APIs. • Powerful, flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools. • Canary release deployments for safely rolling out changes. • CloudTrail logging and monitoring of API usage and API changes. • CloudWatch access logging and execution logging, including the ability to set alarms. For more information, see the section called “CloudWatch metrics” and the section called “Metrics”. • Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with"} +{"global_id": 270, "doc_id": "api-gateway", "chunk_id": "1", "question_id": 3, "question": "What authentication mechanisms does API Gateway offer?", "answer_span": "Powerful, flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools.", "chunk": "API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon API Gateway provide you or your developer customers with an integrated and consistent developer experience for building AWS serverless applications. 
API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls. These tasks include traffic management, authorization and access control, monitoring, and API version management. API Gateway acts as a \"front door\" for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, any web application, or real-time communication applications. Features of API Gateway Amazon API Gateway offers features such as the following: Architecture of API Gateway 2 Amazon API Gateway Developer Guide • Support for stateful (WebSocket) and stateless (HTTP and REST) APIs. • Powerful, flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools. • Canary release deployments for safely rolling out changes. • CloudTrail logging and monitoring of API usage and API changes. • CloudWatch access logging and execution logging, including the ability to set alarms. For more information, see the section called “CloudWatch metrics” and the section called “Metrics”. • Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with"} +{"global_id": 271, "doc_id": "api-gateway", "chunk_id": "1", "question_id": 4, "question": "What is one feature of API Gateway related to logging?", "answer_span": "CloudTrail logging and monitoring of API usage and API changes.", "chunk": "API Gateway 1 Amazon API Gateway Developer Guide • Amazon API Gateway concepts • Choose between REST APIs and HTTP APIs • Get started with the REST API console Architecture of API Gateway The following diagram shows API Gateway architecture. This diagram illustrates how the APIs you build in Amazon API Gateway provide you or your developer customers with an integrated and consistent developer experience for building AWS serverless applications. API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls. These tasks include traffic management, authorization and access control, monitoring, and API version management. API Gateway acts as a \"front door\" for applications to access data, business logic, or functionality from your backend services, such as workloads running on Amazon Elastic Compute Cloud (Amazon EC2), code running on AWS Lambda, any web application, or real-time communication applications. Features of API Gateway Amazon API Gateway offers features such as the following: Architecture of API Gateway 2 Amazon API Gateway Developer Guide • Support for stateful (WebSocket) and stateless (HTTP and REST) APIs. • Powerful, flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools. • Canary release deployments for safely rolling out changes. • CloudTrail logging and monitoring of API usage and API changes. • CloudWatch access logging and execution logging, including the ability to set alarms. For more information, see the section called “CloudWatch metrics” and the section called “Metrics”. • Ability to use AWS CloudFormation templates to enable API creation. 
For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with"} +{"global_id": 272, "doc_id": "api-gateway", "chunk_id": "2", "question_id": 1, "question": "What ability does AWS CloudFormation templates provide?", "answer_span": "Ability to use AWS CloudFormation templates to enable API creation.", "chunk": "Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with AWS X-Ray for understanding and triaging performance latencies. For a complete list of API Gateway feature releases, see Document history. API Gateway use cases The following use cases section presents an overview of the different types of API Gateway APIs and the different kinds of developers who use API Gateway. For more detailed information about the difference between REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”. Topics • Use API Gateway to create REST APIs • Use API Gateway to create HTTP APIs • Use API Gateway to create WebSocket APIs • Who uses API Gateway? Use API Gateway to create REST APIs An API Gateway REST API is made up of resources and methods. A resource is a logical entity that an app can access through a resource path. A method corresponds to a REST API request that is submitted by the user of your API and the response returned to the user. API Gateway use cases 3 Amazon API Gateway Developer Guide For example, /incomes could be the path of a resource representing the income of the app user. A resource can have one or more operations that are defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the"} +{"global_id": 273, "doc_id": "api-gateway", "chunk_id": "2", "question_id": 2, "question": "What integration does API Gateway have for protecting APIs?", "answer_span": "Integration with AWS WAF for protecting your APIs against common web exploits.", "chunk": "Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with AWS X-Ray for understanding and triaging performance latencies. For a complete list of API Gateway feature releases, see Document history. API Gateway use cases The following use cases section presents an overview of the different types of API Gateway APIs and the different kinds of developers who use API Gateway. For more detailed information about the difference between REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”. Topics • Use API Gateway to create REST APIs • Use API Gateway to create HTTP APIs • Use API Gateway to create WebSocket APIs • Who uses API Gateway? Use API Gateway to create REST APIs An API Gateway REST API is made up of resources and methods. 
A resource is a logical entity that an app can access through a resource path. A method corresponds to a REST API request that is submitted by the user of your API and the response returned to the user. API Gateway use cases 3 Amazon API Gateway Developer Guide For example, /incomes could be the path of a resource representing the income of the app user. A resource can have one or more operations that are defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the"} +{"global_id": 274, "doc_id": "api-gateway", "chunk_id": "2", "question_id": 3, "question": "What is a resource in the context of an API Gateway REST API?", "answer_span": "A resource is a logical entity that an app can access through a resource path.", "chunk": "Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with AWS X-Ray for understanding and triaging performance latencies. For a complete list of API Gateway feature releases, see Document history. API Gateway use cases The following use cases section presents an overview of the different types of API Gateway APIs and the different kinds of developers who use API Gateway. For more detailed information about the difference between REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”. Topics • Use API Gateway to create REST APIs • Use API Gateway to create HTTP APIs • Use API Gateway to create WebSocket APIs • Who uses API Gateway? Use API Gateway to create REST APIs An API Gateway REST API is made up of resources and methods. A resource is a logical entity that an app can access through a resource path. A method corresponds to a REST API request that is submitted by the user of your API and the response returned to the user. API Gateway use cases 3 Amazon API Gateway Developer Guide For example, /incomes could be the path of a resource representing the income of the app user. A resource can have one or more operations that are defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the"} +{"global_id": 275, "doc_id": "api-gateway", "chunk_id": "2", "question_id": 4, "question": "What HTTP verbs can define operations for a resource?", "answer_span": "appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE.", "chunk": "Ability to use AWS CloudFormation templates to enable API creation. For more information, see Amazon API Gateway Resource Types Reference and Amazon API Gateway V2 Resource Types Reference. • Support for custom domain names. • Integration with AWS WAF for protecting your APIs against common web exploits. • Integration with AWS X-Ray for understanding and triaging performance latencies. For a complete list of API Gateway feature releases, see Document history. 
API Gateway use cases The following use cases section presents an overview of the different types of API Gateway APIs and the different kinds of developers who use API Gateway. For more detailed information about the difference between REST APIs and HTTP APIs, see the section called “Choose between REST APIs and HTTP APIs ”. Topics • Use API Gateway to create REST APIs • Use API Gateway to create HTTP APIs • Use API Gateway to create WebSocket APIs • Who uses API Gateway? Use API Gateway to create REST APIs An API Gateway REST API is made up of resources and methods. A resource is a logical entity that an app can access through a resource path. A method corresponds to a REST API request that is submitted by the user of your API and the response returned to the user. API Gateway use cases 3 Amazon API Gateway Developer Guide For example, /incomes could be the path of a resource representing the income of the app user. A resource can have one or more operations that are defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the"} +{"global_id": 276, "doc_id": "api-gateway", "chunk_id": "3", "question_id": 1, "question": "What HTTP verbs are defined in the chunk?", "answer_span": "defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE.", "chunk": "defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the reported expenses incurred by the caller. The app doesn't need to know where the requested data is stored and fetched from on the backend. In API Gateway REST APIs, the frontend is encapsulated by method requests and method responses. The API interfaces with the backend by means of integration requests and integration responses. For example, with DynamoDB as the backend, the API developer sets up the integration request to forward the incoming method request to the chosen backend. The setup includes specifications of an appropriate DynamoDB action, required IAM role and policies, and required input data transformation. The backend returns the result to API Gateway as an integration response. To route the integration response to an appropriate method response (of a given HTTP status code) to the client, you can configure the integration response to map required response parameters from integration to method. You then translate the output data format of the backend to that of the frontend, if necessary. API Gateway enables you to define a schema or model for the payload to facilitate setting up the body mapping template. API Gateway provides REST API management functionality such as the following: • Support for generating SDKs and creating API documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. 
You can use HTTP APIs to send requests to AWS Lambda functions or to"} +{"global_id": 277, "doc_id": "api-gateway", "chunk_id": "3", "question_id": 2, "question": "What does a POST /incomes method do?", "answer_span": "a POST /incomes method could add an income earned by the caller.", "chunk": "defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the reported expenses incurred by the caller. The app doesn't need to know where the requested data is stored and fetched from on the backend. In API Gateway REST APIs, the frontend is encapsulated by method requests and method responses. The API interfaces with the backend by means of integration requests and integration responses. For example, with DynamoDB as the backend, the API developer sets up the integration request to forward the incoming method request to the chosen backend. The setup includes specifications of an appropriate DynamoDB action, required IAM role and policies, and required input data transformation. The backend returns the result to API Gateway as an integration response. To route the integration response to an appropriate method response (of a given HTTP status code) to the client, you can configure the integration response to map required response parameters from integration to method. You then translate the output data format of the backend to that of the frontend, if necessary. API Gateway enables you to define a schema or model for the payload to facilitate setting up the body mapping template. API Gateway provides REST API management functionality such as the following: • Support for generating SDKs and creating API documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to"} +{"global_id": 278, "doc_id": "api-gateway", "chunk_id": "3", "question_id": 3, "question": "What does API Gateway enable you to define for the payload?", "answer_span": "API Gateway enables you to define a schema or model for the payload to facilitate setting up the body mapping template.", "chunk": "defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the reported expenses incurred by the caller. The app doesn't need to know where the requested data is stored and fetched from on the backend. In API Gateway REST APIs, the frontend is encapsulated by method requests and method responses. The API interfaces with the backend by means of integration requests and integration responses. For example, with DynamoDB as the backend, the API developer sets up the integration request to forward the incoming method request to the chosen backend. The setup includes specifications of an appropriate DynamoDB action, required IAM role and policies, and required input data transformation. The backend returns the result to API Gateway as an integration response. 
To route the integration response to an appropriate method response (of a given HTTP status code) to the client, you can configure the integration response to map required response parameters from integration to method. You then translate the output data format of the backend to that of the frontend, if necessary. API Gateway enables you to define a schema or model for the payload to facilitate setting up the body mapping template. API Gateway provides REST API management functionality such as the following: • Support for generating SDKs and creating API documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to"} +{"global_id": 279, "doc_id": "api-gateway", "chunk_id": "3", "question_id": 4, "question": "What is one functionality provided by API Gateway?", "answer_span": "API Gateway provides REST API management functionality such as the following: • Support for generating SDKs and creating API documentation using API Gateway extensions to OpenAPI.", "chunk": "defined by appropriate HTTP verbs such as GET, POST, PUT, PATCH, and DELETE. A combination of a resource path and an operation identifies a method of the API. For example, a POST /incomes method could add an income earned by the caller, and a GET /expenses method could query the reported expenses incurred by the caller. The app doesn't need to know where the requested data is stored and fetched from on the backend. In API Gateway REST APIs, the frontend is encapsulated by method requests and method responses. The API interfaces with the backend by means of integration requests and integration responses. For example, with DynamoDB as the backend, the API developer sets up the integration request to forward the incoming method request to the chosen backend. The setup includes specifications of an appropriate DynamoDB action, required IAM role and policies, and required input data transformation. The backend returns the result to API Gateway as an integration response. To route the integration response to an appropriate method response (of a given HTTP status code) to the client, you can configure the integration response to map required response parameters from integration to method. You then translate the output data format of the backend to that of the frontend, if necessary. API Gateway enables you to define a schema or model for the payload to facilitate setting up the body mapping template. API Gateway provides REST API management functionality such as the following: • Support for generating SDKs and creating API documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to"} +{"global_id": 280, "doc_id": "api-gateway", "chunk_id": "4", "question_id": 1, "question": "What do HTTP APIs enable you to create?", "answer_span": "HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs.", "chunk": "documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. 
You can use HTTP APIs to send requests to AWS Lambda functions or to any publicly routable HTTP endpoint. For example, you can create an HTTP API that integrates with a Lambda function on the backend. When a client calls your API, API Gateway sends the request to the Lambda function and returns the function's response to the client. Use API Gateway to create HTTP APIs 4 Amazon API Gateway Developer Guide HTTP APIs support OpenID Connect and OAuth 2.0 authorization. They come with built-in support for cross-origin resource sharing (CORS) and automatic deployments. To learn more, see the section called “Choose between REST APIs and HTTP APIs ”. Use API Gateway to create WebSocket APIs In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms. For example, you could build a serverless application using an API Gateway WebSocket API and AWS Lambda to send and receive messages to and from individual users or groups of users in a chat room. Or you could invoke backend services such as AWS Lambda, Amazon Kinesis, or an HTTP endpoint based on message content. You can use API Gateway WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway"} +{"global_id": 281, "doc_id": "api-gateway", "chunk_id": "4", "question_id": 2, "question": "What authorization methods do HTTP APIs support?", "answer_span": "HTTP APIs support OpenID Connect and OAuth 2.0 authorization.", "chunk": "documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to any publicly routable HTTP endpoint. For example, you can create an HTTP API that integrates with a Lambda function on the backend. When a client calls your API, API Gateway sends the request to the Lambda function and returns the function's response to the client. Use API Gateway to create HTTP APIs 4 Amazon API Gateway Developer Guide HTTP APIs support OpenID Connect and OAuth 2.0 authorization. They come with built-in support for cross-origin resource sharing (CORS) and automatic deployments. To learn more, see the section called “Choose between REST APIs and HTTP APIs ”. Use API Gateway to create WebSocket APIs In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms. For example, you could build a serverless application using an API Gateway WebSocket API and AWS Lambda to send and receive messages to and from individual users or groups of users in a chat room. Or you could invoke backend services such as AWS Lambda, Amazon Kinesis, or an HTTP endpoint based on message content. You can use API Gateway WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. 
Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway"} +{"global_id": 282, "doc_id": "api-gateway", "chunk_id": "4", "question_id": 3, "question": "What can backend servers do in a WebSocket API?", "answer_span": "Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms.", "chunk": "documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to any publicly routable HTTP endpoint. For example, you can create an HTTP API that integrates with a Lambda function on the backend. When a client calls your API, API Gateway sends the request to the Lambda function and returns the function's response to the client. Use API Gateway to create HTTP APIs 4 Amazon API Gateway Developer Guide HTTP APIs support OpenID Connect and OAuth 2.0 authorization. They come with built-in support for cross-origin resource sharing (CORS) and automatic deployments. To learn more, see the section called “Choose between REST APIs and HTTP APIs ”. Use API Gateway to create WebSocket APIs In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms. For example, you could build a serverless application using an API Gateway WebSocket API and AWS Lambda to send and receive messages to and from individual users or groups of users in a chat room. Or you could invoke backend services such as AWS Lambda, Amazon Kinesis, or an HTTP endpoint based on message content. You can use API Gateway WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway"} +{"global_id": 283, "doc_id": "api-gateway", "chunk_id": "4", "question_id": 4, "question": "What are some targeted use cases for API Gateway WebSocket APIs?", "answer_span": "Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications.", "chunk": "documentation using API Gateway extensions to OpenAPI • Throttling of HTTP requests Use API Gateway to create HTTP APIs HTTP APIs enable you to create RESTful APIs with lower latency and lower cost than REST APIs. You can use HTTP APIs to send requests to AWS Lambda functions or to any publicly routable HTTP endpoint. For example, you can create an HTTP API that integrates with a Lambda function on the backend. When a client calls your API, API Gateway sends the request to the Lambda function and returns the function's response to the client. Use API Gateway to create HTTP APIs 4 Amazon API Gateway Developer Guide HTTP APIs support OpenID Connect and OAuth 2.0 authorization. They come with built-in support for cross-origin resource sharing (CORS) and automatic deployments. To learn more, see the section called “Choose between REST APIs and HTTP APIs ”. 
Use API Gateway to create WebSocket APIs In a WebSocket API, the client and the server can both send messages to each other at any time. Backend servers can easily push data to connected users and devices, avoiding the need to implement complex polling mechanisms. For example, you could build a serverless application using an API Gateway WebSocket API and AWS Lambda to send and receive messages to and from individual users or groups of users in a chat room. Or you could invoke backend services such as AWS Lambda, Amazon Kinesis, or an HTTP endpoint based on message content. You can use API Gateway WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway"} +{"global_id": 284, "doc_id": "api-gateway", "chunk_id": "5", "question_id": 1, "question": "What are the targeted use cases for WebSocket APIs?", "answer_span": "Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications", "chunk": "WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway provides WebSocket API management functionality such as the following: • Monitoring and throttling of connections and messages • Using AWS X-Ray to trace messages as they travel through the APIs to backend services • Easy integration with HTTP/HTTPS endpoints Who uses API Gateway? There are two kinds of developers who use API Gateway: API developers and app developers. An API developer creates and deploys an API to enable the required functionality in API Gateway. The API developer must be a user in the AWS account that owns the API. An app developer builds a functioning application to call AWS services by invoking a WebSocket or REST API created by an API developer in API Gateway. Use API Gateway to create WebSocket APIs 5 Amazon API Gateway Developer Guide The app developer is the customer of the API developer. The app developer doesn't need to have an AWS account, provided that the API either doesn't require IAM permissions or supports authorization of users through third-party federated identity providers supported by Amazon Cognito user pool identity federation. Such identity providers include Amazon, Amazon Cognito user pools, Facebook, and Google. Creating and managing an API Gateway API An API developer works with the API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. 
There are several ways"} +{"global_id": 285, "doc_id": "api-gateway", "chunk_id": "5", "question_id": 2, "question": "What functionality does API Gateway provide for WebSocket API management?", "answer_span": "API Gateway provides WebSocket API management functionality such as the following: • Monitoring and throttling of connections and messages • Using AWS X-Ray to trace messages as they travel through the APIs to backend services • Easy integration with HTTP/HTTPS endpoints", "chunk": "WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway provides WebSocket API management functionality such as the following: • Monitoring and throttling of connections and messages • Using AWS X-Ray to trace messages as they travel through the APIs to backend services • Easy integration with HTTP/HTTPS endpoints Who uses API Gateway? There are two kinds of developers who use API Gateway: API developers and app developers. An API developer creates and deploys an API to enable the required functionality in API Gateway. The API developer must be a user in the AWS account that owns the API. An app developer builds a functioning application to call AWS services by invoking a WebSocket or REST API created by an API developer in API Gateway. Use API Gateway to create WebSocket APIs 5 Amazon API Gateway Developer Guide The app developer is the customer of the API developer. The app developer doesn't need to have an AWS account, provided that the API either doesn't require IAM permissions or supports authorization of users through third-party federated identity providers supported by Amazon Cognito user pool identity federation. Such identity providers include Amazon, Amazon Cognito user pools, Facebook, and Google. Creating and managing an API Gateway API An API developer works with the API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. There are several ways"} +{"global_id": 286, "doc_id": "api-gateway", "chunk_id": "5", "question_id": 3, "question": "Who are the two kinds of developers who use API Gateway?", "answer_span": "There are two kinds of developers who use API Gateway: API developers and app developers.", "chunk": "WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway provides WebSocket API management functionality such as the following: • Monitoring and throttling of connections and messages • Using AWS X-Ray to trace messages as they travel through the APIs to backend services • Easy integration with HTTP/HTTPS endpoints Who uses API Gateway? There are two kinds of developers who use API Gateway: API developers and app developers. An API developer creates and deploys an API to enable the required functionality in API Gateway. 
The API developer must be a user in the AWS account that owns the API. An app developer builds a functioning application to call AWS services by invoking a WebSocket or REST API created by an API developer in API Gateway. Use API Gateway to create WebSocket APIs 5 Amazon API Gateway Developer Guide The app developer is the customer of the API developer. The app developer doesn't need to have an AWS account, provided that the API either doesn't require IAM permissions or supports authorization of users through third-party federated identity providers supported by Amazon Cognito user pool identity federation. Such identity providers include Amazon, Amazon Cognito user pools, Facebook, and Google. Creating and managing an API Gateway API An API developer works with the API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. There are several ways"} +{"global_id": 287, "doc_id": "api-gateway", "chunk_id": "5", "question_id": 4, "question": "What must an API developer be in order to create and deploy an API?", "answer_span": "The API developer must be a user in the AWS account that owns the API.", "chunk": "WebSocket APIs to build secure, real-time communication applications without having to provision or manage any servers to manage connections or large-scale data exchanges. Targeted use cases include real-time applications such as the following: • Chat applications • Real-time dashboards such as stock tickers • Real-time alerts and notifications API Gateway provides WebSocket API management functionality such as the following: • Monitoring and throttling of connections and messages • Using AWS X-Ray to trace messages as they travel through the APIs to backend services • Easy integration with HTTP/HTTPS endpoints Who uses API Gateway? There are two kinds of developers who use API Gateway: API developers and app developers. An API developer creates and deploys an API to enable the required functionality in API Gateway. The API developer must be a user in the AWS account that owns the API. An app developer builds a functioning application to call AWS services by invoking a WebSocket or REST API created by an API developer in API Gateway. Use API Gateway to create WebSocket APIs 5 Amazon API Gateway Developer Guide The app developer is the customer of the API developer. The app developer doesn't need to have an AWS account, provided that the API either doesn't require IAM permissions or supports authorization of users through third-party federated identity providers supported by Amazon Cognito user pool identity federation. Such identity providers include Amazon, Amazon Cognito user pools, Facebook, and Google. Creating and managing an API Gateway API An API developer works with the API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. 
There are several ways"} +{"global_id": 288, "doc_id": "api-gateway", "chunk_id": "6", "question_id": 1, "question": "What is the API Gateway service component named for API management?", "answer_span": "API Gateway service component for API management, named apigateway, to create, configure, and deploy an API.", "chunk": "API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. There are several ways to call this API. They include using the AWS Command Line Interface (AWS CLI), or by using an AWS SDK. In addition, you can enable API creation with AWS CloudFormation templates or (in the case of REST APIs and HTTP APIs) OpenAPI extensions for API Gateway. For a list of Regions where API Gateway is available, as well as the associated control service endpoints, see Amazon API Gateway Endpoints and Quotas. Calling an API Gateway API An app developer works with the API Gateway service component for API execution, named execute-api, to invoke an API that was created or deployed in API Gateway. The underlying programming entities are exposed by the created API. There are several ways to call such an API. To learn more, see Invoke REST APIs in API Gateway and Invoke WebSocket APIs. Accessing API Gateway You can access Amazon API Gateway in the following ways: • AWS Management Console – The AWS Management Console provides a web interface for creating and managing APIs. After you complete the steps in Prerequisites, you can access the API Gateway console at https://console.aws.amazon.com/apigateway. • AWS SDKs – If you're using a programming language that AWS provides an SDK for, you can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs –"} +{"global_id": 289, "doc_id": "api-gateway", "chunk_id": "6", "question_id": 2, "question": "How can an API developer create and manage an API?", "answer_span": "you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references.", "chunk": "API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. There are several ways to call this API. They include using the AWS Command Line Interface (AWS CLI), or by using an AWS SDK. In addition, you can enable API creation with AWS CloudFormation templates or (in the case of REST APIs and HTTP APIs) OpenAPI extensions for API Gateway. For a list of Regions where API Gateway is available, as well as the associated control service endpoints, see Amazon API Gateway Endpoints and Quotas. Calling an API Gateway API An app developer works with the API Gateway service component for API execution, named execute-api, to invoke an API that was created or deployed in API Gateway. The underlying programming entities are exposed by the created API. There are several ways to call such an API. To learn more, see Invoke REST APIs in API Gateway and Invoke WebSocket APIs. 
Accessing API Gateway You can access Amazon API Gateway in the following ways: • AWS Management Console – The AWS Management Console provides a web interface for creating and managing APIs. After you complete the steps in Prerequisites, you can access the API Gateway console at https://console.aws.amazon.com/apigateway. • AWS SDKs – If you're using a programming language that AWS provides an SDK for, you can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs –"} +{"global_id": 290, "doc_id": "api-gateway", "chunk_id": "6", "question_id": 3, "question": "What are the ways to call the API Gateway API?", "answer_span": "They include using the AWS Command Line Interface (AWS CLI), or by using an AWS SDK.", "chunk": "API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. There are several ways to call this API. They include using the AWS Command Line Interface (AWS CLI), or by using an AWS SDK. In addition, you can enable API creation with AWS CloudFormation templates or (in the case of REST APIs and HTTP APIs) OpenAPI extensions for API Gateway. For a list of Regions where API Gateway is available, as well as the associated control service endpoints, see Amazon API Gateway Endpoints and Quotas. Calling an API Gateway API An app developer works with the API Gateway service component for API execution, named execute-api, to invoke an API that was created or deployed in API Gateway. The underlying programming entities are exposed by the created API. There are several ways to call such an API. To learn more, see Invoke REST APIs in API Gateway and Invoke WebSocket APIs. Accessing API Gateway You can access Amazon API Gateway in the following ways: • AWS Management Console – The AWS Management Console provides a web interface for creating and managing APIs. After you complete the steps in Prerequisites, you can access the API Gateway console at https://console.aws.amazon.com/apigateway. • AWS SDKs – If you're using a programming language that AWS provides an SDK for, you can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs –"} +{"global_id": 291, "doc_id": "api-gateway", "chunk_id": "6", "question_id": 4, "question": "What provides a web interface for creating and managing APIs?", "answer_span": "The AWS Management Console provides a web interface for creating and managing APIs.", "chunk": "API Gateway service component for API management, named apigateway, to create, configure, and deploy an API. As an API developer, you can create and manage an API by using the API Gateway console, described in Get started with API Gateway, or by calling the API references. There are several ways to call this API. They include using the AWS Command Line Interface (AWS CLI), or by using an AWS SDK. 
In addition, you can enable API creation with AWS CloudFormation templates or (in the case of REST APIs and HTTP APIs) OpenAPI extensions for API Gateway. For a list of Regions where API Gateway is available, as well as the associated control service endpoints, see Amazon API Gateway Endpoints and Quotas. Calling an API Gateway API An app developer works with the API Gateway service component for API execution, named execute-api, to invoke an API that was created or deployed in API Gateway. The underlying programming entities are exposed by the created API. There are several ways to call such an API. To learn more, see Invoke REST APIs in API Gateway and Invoke WebSocket APIs. Accessing API Gateway You can access Amazon API Gateway in the following ways: • AWS Management Console – The AWS Management Console provides a web interface for creating and managing APIs. After you complete the steps in Prerequisites, you can access the API Gateway console at https://console.aws.amazon.com/apigateway. • AWS SDKs – If you're using a programming language that AWS provides an SDK for, you can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs –"} +{"global_id": 292, "doc_id": "api-gateway", "chunk_id": "7", "question_id": 1, "question": "What can be used to access API Gateway?", "answer_span": "can use an SDK to access API Gateway.", "chunk": "can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs – If you're using a programming language that an SDK isn't available for, see the Amazon API Gateway Version 1 API Reference and Amazon API Gateway Version 2 API Reference. • AWS Command Line Interface – For more information, see Getting Set Up with the AWS Command Line Interface in the AWS Command Line Interface User Guide. • AWS Tools for Windows PowerShell – For more information, see Setting Up the AWS Tools for Windows PowerShell in the AWS Tools for PowerShell User Guide. Part of AWS serverless infrastructure Together with AWS Lambda, API Gateway forms the app-facing part of the AWS serverless infrastructure. To learn more about getting started with serverless, see the Serverless Developer Guide. For an app to call publicly available AWS services, you can use Lambda to interact with required services and expose Lambda functions through API methods in API Gateway. AWS Lambda runs your code on a highly available computing infrastructure. It performs the necessary execution and administration of computing resources. To enable serverless applications, API Gateway supports streamlined proxy integrations with AWS Lambda and HTTP endpoints. How to get started with Amazon API Gateway For an introduction to Amazon API Gateway, see the following: • Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. 
API Gateway API Gateway is"} +{"global_id": 293, "doc_id": "api-gateway", "chunk_id": "7", "question_id": 2, "question": "What do SDKs simplify?", "answer_span": "SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands.", "chunk": "can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs – If you're using a programming language that an SDK isn't available for, see the Amazon API Gateway Version 1 API Reference and Amazon API Gateway Version 2 API Reference. • AWS Command Line Interface – For more information, see Getting Set Up with the AWS Command Line Interface in the AWS Command Line Interface User Guide. • AWS Tools for Windows PowerShell – For more information, see Setting Up the AWS Tools for Windows PowerShell in the AWS Tools for PowerShell User Guide. Part of AWS serverless infrastructure Together with AWS Lambda, API Gateway forms the app-facing part of the AWS serverless infrastructure. To learn more about getting started with serverless, see the Serverless Developer Guide. For an app to call publicly available AWS services, you can use Lambda to interact with required services and expose Lambda functions through API methods in API Gateway. AWS Lambda runs your code on a highly available computing infrastructure. It performs the necessary execution and administration of computing resources. To enable serverless applications, API Gateway supports streamlined proxy integrations with AWS Lambda and HTTP endpoints. How to get started with Amazon API Gateway For an introduction to Amazon API Gateway, see the following: • Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. API Gateway API Gateway is"} +{"global_id": 294, "doc_id": "api-gateway", "chunk_id": "7", "question_id": 3, "question": "What forms the app-facing part of the AWS serverless infrastructure?", "answer_span": "API Gateway forms the app-facing part of the AWS serverless infrastructure.", "chunk": "can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs – If you're using a programming language that an SDK isn't available for, see the Amazon API Gateway Version 1 API Reference and Amazon API Gateway Version 2 API Reference. • AWS Command Line Interface – For more information, see Getting Set Up with the AWS Command Line Interface in the AWS Command Line Interface User Guide. • AWS Tools for Windows PowerShell – For more information, see Setting Up the AWS Tools for Windows PowerShell in the AWS Tools for PowerShell User Guide. Part of AWS serverless infrastructure Together with AWS Lambda, API Gateway forms the app-facing part of the AWS serverless infrastructure. To learn more about getting started with serverless, see the Serverless Developer Guide. 
For an app to call publicly available AWS services, you can use Lambda to interact with required services and expose Lambda functions through API methods in API Gateway. AWS Lambda runs your code on a highly available computing infrastructure. It performs the necessary execution and administration of computing resources. To enable serverless applications, API Gateway supports streamlined proxy integrations with AWS Lambda and HTTP endpoints. How to get started with Amazon API Gateway For an introduction to Amazon API Gateway, see the following: • Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. API Gateway API Gateway is"} +{"global_id": 295, "doc_id": "api-gateway", "chunk_id": "7", "question_id": 4, "question": "What does API Gateway support for serverless applications?", "answer_span": "API Gateway supports streamlined proxy integrations with AWS Lambda and HTTP endpoints.", "chunk": "can use an SDK to access API Gateway. SDKs simplify authentication, integrate easily with your development environment, and provide access to API Gateway commands. For more information, see Tools for Amazon Web Services. Accessing API Gateway 6 Amazon API Gateway Developer Guide • API Gateway V1 and V2 APIs – If you're using a programming language that an SDK isn't available for, see the Amazon API Gateway Version 1 API Reference and Amazon API Gateway Version 2 API Reference. • AWS Command Line Interface – For more information, see Getting Set Up with the AWS Command Line Interface in the AWS Command Line Interface User Guide. • AWS Tools for Windows PowerShell – For more information, see Setting Up the AWS Tools for Windows PowerShell in the AWS Tools for PowerShell User Guide. Part of AWS serverless infrastructure Together with AWS Lambda, API Gateway forms the app-facing part of the AWS serverless infrastructure. To learn more about getting started with serverless, see the Serverless Developer Guide. For an app to call publicly available AWS services, you can use Lambda to interact with required services and expose Lambda functions through API methods in API Gateway. AWS Lambda runs your code on a highly available computing infrastructure. It performs the necessary execution and administration of computing resources. To enable serverless applications, API Gateway supports streamlined proxy integrations with AWS Lambda and HTTP endpoints. How to get started with Amazon API Gateway For an introduction to Amazon API Gateway, see the following: • Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. API Gateway API Gateway is"} +{"global_id": 296, "doc_id": "api-gateway", "chunk_id": "8", "question_id": 1, "question": "What is API Gateway?", "answer_span": "API Gateway is an AWS service that supports the following: Part of AWS serverless infrastructure", "chunk": "Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. 
Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. API Gateway API Gateway is an AWS service that supports the following: Part of AWS serverless infrastructure 7 Amazon API Gateway Developer Guide • Creating, deploying, and managing a RESTful application programming interface (API) to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services. • Creating, deploying, and managing a WebSocket API to expose AWS Lambda functions or other AWS services. • Invoking exposed API methods through the frontend HTTP and WebSocket endpoints. API Gateway REST API A collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. Typically, API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway HTTP API A collection of routes and methods that are integrated with backend HTTP endpoints or Lambda functions. You can deploy this collection in one or more stages. Each route can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages."} +{"global_id": 297, "doc_id": "api-gateway", "chunk_id": "8", "question_id": 2, "question": "What can you create, deploy, and manage with API Gateway?", "answer_span": "Creating, deploying, and managing a RESTful application programming interface (API) to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services.", "chunk": "Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. API Gateway API Gateway is an AWS service that supports the following: Part of AWS serverless infrastructure 7 Amazon API Gateway Developer Guide • Creating, deploying, and managing a RESTful application programming interface (API) to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services. • Creating, deploying, and managing a WebSocket API to expose AWS Lambda functions or other AWS services. • Invoking exposed API methods through the frontend HTTP and WebSocket endpoints. API Gateway REST API A collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. Typically, API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway HTTP API A collection of routes and methods that are integrated with backend HTTP endpoints or Lambda functions. You can deploy this collection in one or more stages. 
Each route can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages."} +{"global_id": 298, "doc_id": "api-gateway", "chunk_id": "8", "question_id": 3, "question": "What is an API Gateway REST API?", "answer_span": "A collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services.", "chunk": "Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. API Gateway API Gateway is an AWS service that supports the following: Part of AWS serverless infrastructure 7 Amazon API Gateway Developer Guide • Creating, deploying, and managing a RESTful application programming interface (API) to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services. • Creating, deploying, and managing a WebSocket API to expose AWS Lambda functions or other AWS services. • Invoking exposed API methods through the frontend HTTP and WebSocket endpoints. API Gateway REST API A collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. Typically, API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway HTTP API A collection of routes and methods that are integrated with backend HTTP endpoints or Lambda functions. You can deploy this collection in one or more stages. Each route can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages."} +{"global_id": 299, "doc_id": "api-gateway", "chunk_id": "8", "question_id": 4, "question": "What does an API Gateway WebSocket API consist of?", "answer_span": "A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services.", "chunk": "Get started, which provides a walkthrough for creating an HTTP API. • Serverless land, which provides instructional videos. • Happy Little API Shorts, which is a series of brief instructional videos. Amazon API Gateway concepts The following section describes introductory concepts for using API Gateway. API Gateway API Gateway is an AWS service that supports the following: Part of AWS serverless infrastructure 7 Amazon API Gateway Developer Guide • Creating, deploying, and managing a RESTful application programming interface (API) to expose backend HTTP endpoints, AWS Lambda functions, or other AWS services. 
• Creating, deploying, and managing a WebSocket API to expose AWS Lambda functions or other AWS services. • Invoking exposed API methods through the frontend HTTP and WebSocket endpoints. API Gateway REST API A collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. Typically, API resources are organized in a resource tree according to the application logic. Each API resource can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway HTTP API A collection of routes and methods that are integrated with backend HTTP endpoints or Lambda functions. You can deploy this collection in one or more stages. Each route can expose one or more API methods that have unique HTTP verbs supported by API Gateway. For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages."} +{"global_id": 300, "doc_id": "api-gateway", "chunk_id": "9", "question_id": 1, "question": "What is an API Gateway WebSocket API?", "answer_span": "API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services.", "chunk": "For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. API methods are invoked through frontend WebSocket connections that you can associate with a registered custom domain name. API deployment A point-in-time snapshot of your API Gateway API. To be available for clients to use, the deployment must be associated with one or more API stages. API developer Your AWS account that owns an API Gateway deployment (for example, a service provider that also supports programmatic access). API endpoint A hostname for an API in API Gateway that is deployed to a specific Region. The hostname is of the form {api-id}.execute-api.{region}.amazonaws.com. The following types of API endpoints are supported: API Gateway concepts 8 Amazon API Gateway Developer Guide • Edge-optimized API endpoint • Private API endpoint • Regional API endpoint API key An alphanumeric string that API Gateway uses to identify an app developer who uses your REST or WebSocket API. API Gateway can generate API keys on your behalf, or you can import them from a CSV file. You can use API keys together with Lambda authorizers or usage plans to control access to your APIs. See API endpoints. API owner See API developer. API stage A logical reference to a lifecycle state of your API (for example, 'dev', 'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. 
An app developer is typically identified"} +{"global_id": 301, "doc_id": "api-gateway", "chunk_id": "9", "question_id": 2, "question": "What must an API deployment be associated with to be available for clients to use?", "answer_span": "the deployment must be associated with one or more API stages.", "chunk": "For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. API methods are invoked through frontend WebSocket connections that you can associate with a registered custom domain name. API deployment A point-in-time snapshot of your API Gateway API. To be available for clients to use, the deployment must be associated with one or more API stages. API developer Your AWS account that owns an API Gateway deployment (for example, a service provider that also supports programmatic access). API endpoint A hostname for an API in API Gateway that is deployed to a specific Region. The hostname is of the form {api-id}.execute-api.{region}.amazonaws.com. The following types of API endpoints are supported: API Gateway concepts 8 Amazon API Gateway Developer Guide • Edge-optimized API endpoint • Private API endpoint • Regional API endpoint API key An alphanumeric string that API Gateway uses to identify an app developer who uses your REST or WebSocket API. API Gateway can generate API keys on your behalf, or you can import them from a CSV file. You can use API keys together with Lambda authorizers or usage plans to control access to your APIs. See API endpoints. API owner See API developer. API stage A logical reference to a lifecycle state of your API (for example, 'dev', 'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. An app developer is typically identified"} +{"global_id": 302, "doc_id": "api-gateway", "chunk_id": "9", "question_id": 3, "question": "What is an API key?", "answer_span": "API key An alphanumeric string that API Gateway uses to identify an app developer who uses your REST or WebSocket API.", "chunk": "For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. API methods are invoked through frontend WebSocket connections that you can associate with a registered custom domain name. API deployment A point-in-time snapshot of your API Gateway API. To be available for clients to use, the deployment must be associated with one or more API stages. API developer Your AWS account that owns an API Gateway deployment (for example, a service provider that also supports programmatic access). API endpoint A hostname for an API in API Gateway that is deployed to a specific Region. The hostname is of the form {api-id}.execute-api.{region}.amazonaws.com. 
The following types of API endpoints are supported: API Gateway concepts 8 Amazon API Gateway Developer Guide • Edge-optimized API endpoint • Private API endpoint • Regional API endpoint API key An alphanumeric string that API Gateway uses to identify an app developer who uses your REST or WebSocket API. API Gateway can generate API keys on your behalf, or you can import them from a CSV file. You can use API keys together with Lambda authorizers or usage plans to control access to your APIs. See API endpoints. API owner See API developer. API stage A logical reference to a lifecycle state of your API (for example, 'dev', 'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. An app developer is typically identified"} +{"global_id": 303, "doc_id": "api-gateway", "chunk_id": "9", "question_id": 4, "question": "Who is considered an app developer?", "answer_span": "An app developer is typically identified", "chunk": "For more information, see the section called “Choose between REST APIs and HTTP APIs ”. API Gateway WebSocket API A collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. API methods are invoked through frontend WebSocket connections that you can associate with a registered custom domain name. API deployment A point-in-time snapshot of your API Gateway API. To be available for clients to use, the deployment must be associated with one or more API stages. API developer Your AWS account that owns an API Gateway deployment (for example, a service provider that also supports programmatic access). API endpoint A hostname for an API in API Gateway that is deployed to a specific Region. The hostname is of the form {api-id}.execute-api.{region}.amazonaws.com. The following types of API endpoints are supported: API Gateway concepts 8 Amazon API Gateway Developer Guide • Edge-optimized API endpoint • Private API endpoint • Regional API endpoint API key An alphanumeric string that API Gateway uses to identify an app developer who uses your REST or WebSocket API. API Gateway can generate API keys on your behalf, or you can import them from a CSV file. You can use API keys together with Lambda authorizers or usage plans to control access to your APIs. See API endpoints. API owner See API developer. API stage A logical reference to a lifecycle state of your API (for example, 'dev', 'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. An app developer is typically identified"} +{"global_id": 304, "doc_id": "api-gateway", "chunk_id": "10", "question_id": 1, "question": "What identifies API stages?", "answer_span": "API stages are identified by API ID and stage name.", "chunk": "'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. An app developer is typically identified by an API key. 
Callback URL When a new client is connected to through a WebSocket connection, you can call an integration in API Gateway to store the client's callback URL. You can then use that callback URL to send messages to the client from the backend system. Developer portal An application that allows your customers to register, discover, and subscribe to your API products (API Gateway usage plans), manage their API keys, and view their usage metrics for your APIs. Edge-optimized API endpoint The default hostname of an API Gateway API that is deployed to the specified Region while using a CloudFront distribution to facilitate client access typically from across AWS Regions. API API Gateway concepts 9 Amazon API Gateway Developer Guide requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time for geographically diverse clients. See API endpoints. Integration request The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the body of a route request or the parameters and body of a method request to the formats required by the backend. Integration response The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend"} +{"global_id": 305, "doc_id": "api-gateway", "chunk_id": "10", "question_id": 2, "question": "Who is an app developer?", "answer_span": "An app developer is typically identified by an API key.", "chunk": "'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. An app developer is typically identified by an API key. Callback URL When a new client is connected to through a WebSocket connection, you can call an integration in API Gateway to store the client's callback URL. You can then use that callback URL to send messages to the client from the backend system. Developer portal An application that allows your customers to register, discover, and subscribe to your API products (API Gateway usage plans), manage their API keys, and view their usage metrics for your APIs. Edge-optimized API endpoint The default hostname of an API Gateway API that is deployed to the specified Region while using a CloudFront distribution to facilitate client access typically from across AWS Regions. API API Gateway concepts 9 Amazon API Gateway Developer Guide requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time for geographically diverse clients. See API endpoints. Integration request The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the body of a route request or the parameters and body of a method request to the formats required by the backend. Integration response The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. 
Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend"} +{"global_id": 306, "doc_id": "api-gateway", "chunk_id": "10", "question_id": 3, "question": "What is a developer portal?", "answer_span": "An application that allows your customers to register, discover, and subscribe to your API products (API Gateway usage plans), manage their API keys, and view their usage metrics for your APIs.", "chunk": "'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. An app developer is typically identified by an API key. Callback URL When a new client is connected to through a WebSocket connection, you can call an integration in API Gateway to store the client's callback URL. You can then use that callback URL to send messages to the client from the backend system. Developer portal An application that allows your customers to register, discover, and subscribe to your API products (API Gateway usage plans), manage their API keys, and view their usage metrics for your APIs. Edge-optimized API endpoint The default hostname of an API Gateway API that is deployed to the specified Region while using a CloudFront distribution to facilitate client access typically from across AWS Regions. API API Gateway concepts 9 Amazon API Gateway Developer Guide requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time for geographically diverse clients. See API endpoints. Integration request The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the body of a route request or the parameters and body of a method request to the formats required by the backend. Integration response The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend"} +{"global_id": 307, "doc_id": "api-gateway", "chunk_id": "10", "question_id": 4, "question": "What is an edge-optimized API endpoint?", "answer_span": "The default hostname of an API Gateway API that is deployed to the specified Region while using a CloudFront distribution to facilitate client access typically from across AWS Regions.", "chunk": "'prod', 'beta', 'v2'). API stages are identified by API ID and stage name. App developer An app creator who may or may not have an AWS account and interacts with the API that you, the API developer, have deployed. App developers are your customers. An app developer is typically identified by an API key. Callback URL When a new client is connected to through a WebSocket connection, you can call an integration in API Gateway to store the client's callback URL. You can then use that callback URL to send messages to the client from the backend system. Developer portal An application that allows your customers to register, discover, and subscribe to your API products (API Gateway usage plans), manage their API keys, and view their usage metrics for your APIs. 
Edge-optimized API endpoint The default hostname of an API Gateway API that is deployed to the specified Region while using a CloudFront distribution to facilitate client access typically from across AWS Regions. API API Gateway concepts 9 Amazon API Gateway Developer Guide requests are routed to the nearest CloudFront Point of Presence (POP), which typically improves connection time for geographically diverse clients. See API endpoints. Integration request The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the body of a route request or the parameters and body of a method request to the formats required by the backend. Integration response The internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend"} +{"global_id": 308, "doc_id": "api-gateway", "chunk_id": "11", "question_id": 1, "question": "What is a mapping template?", "answer_span": "A script in Velocity Template Language (VTL) that transforms a request body from the frontend data format to the backend data format, or that transforms a response body from the backend data format to the frontend data format.", "chunk": "REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend data format to the backend data format, or that transforms a response body from the backend data format to the frontend data format. Mapping templates can be specified in the integration request or in the integration response. They can reference data made available at runtime as context and stage variables. The mapping can be as simple as an identity transform that passes the headers or body through the integration as-is from the client to the backend for a request. The same is true for a response, in which the payload is passed from the backend to the client. Method request The public interface of an API method in API Gateway that defines the parameters and body that an app developer must send in requests to access the backend through the API. Method response The public interface of a REST API that defines the status codes, headers, and body models that an app developer should expect in responses from the API. Mock integration In a mock integration, API responses are generated from API Gateway directly, without the need for an integration backend. As an API developer, you decide how API Gateway responds to a mock integration request. For this, you configure the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. 
A model is required for generating a strongly typed SDK of"} +{"global_id": 309, "doc_id": "api-gateway", "chunk_id": "11", "question_id": 2, "question": "What does the method request define?", "answer_span": "The public interface of an API method in API Gateway that defines the parameters and body that an app developer must send in requests to access the backend through the API.", "chunk": "REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend data format to the backend data format, or that transforms a response body from the backend data format to the frontend data format. Mapping templates can be specified in the integration request or in the integration response. They can reference data made available at runtime as context and stage variables. The mapping can be as simple as an identity transform that passes the headers or body through the integration as-is from the client to the backend for a request. The same is true for a response, in which the payload is passed from the backend to the client. Method request The public interface of an API method in API Gateway that defines the parameters and body that an app developer must send in requests to access the backend through the API. Method response The public interface of a REST API that defines the status codes, headers, and body models that an app developer should expect in responses from the API. Mock integration In a mock integration, API responses are generated from API Gateway directly, without the need for an integration backend. As an API developer, you decide how API Gateway responds to a mock integration request. For this, you configure the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of"} +{"global_id": 310, "doc_id": "api-gateway", "chunk_id": "11", "question_id": 3, "question": "What is a mock integration?", "answer_span": "In a mock integration, API responses are generated from API Gateway directly, without the need for an integration backend.", "chunk": "REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend data format to the backend data format, or that transforms a response body from the backend data format to the frontend data format. Mapping templates can be specified in the integration request or in the integration response. They can reference data made available at runtime as context and stage variables. The mapping can be as simple as an identity transform that passes the headers or body through the integration as-is from the client to the backend for a request. The same is true for a response, in which the payload is passed from the backend to the client. Method request The public interface of an API method in API Gateway that defines the parameters and body that an app developer must send in requests to access the backend through the API. 
Method response The public interface of a REST API that defines the status codes, headers, and body models that an app developer should expect in responses from the API. Mock integration In a mock integration, API responses are generated from API Gateway directly, without the need for an integration backend. As an API developer, you decide how API Gateway responds to a mock integration request. For this, you configure the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of"} +{"global_id": 311, "doc_id": "api-gateway", "chunk_id": "11", "question_id": 4, "question": "What is a model in the context of API Gateway?", "answer_span": "A data schema specifying the data structure of a request or response payload.", "chunk": "REST API method in API Gateway, in which you map the status codes, headers, and payload that are received from the backend to the response format that is returned to a client app. Mapping template A script in Velocity Template Language (VTL) that transforms a request body from the frontend data format to the backend data format, or that transforms a response body from the backend data format to the frontend data format. Mapping templates can be specified in the integration request or in the integration response. They can reference data made available at runtime as context and stage variables. The mapping can be as simple as an identity transform that passes the headers or body through the integration as-is from the client to the backend for a request. The same is true for a response, in which the payload is passed from the backend to the client. Method request The public interface of an API method in API Gateway that defines the parameters and body that an app developer must send in requests to access the backend through the API. Method response The public interface of a REST API that defines the status codes, headers, and body models that an app developer should expect in responses from the API. Mock integration In a mock integration, API responses are generated from API Gateway directly, without the need for an integration backend. As an API developer, you decide how API Gateway responds to a mock integration request. For this, you configure the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of"} +{"global_id": 312, "doc_id": "api-gateway", "chunk_id": "12", "question_id": 1, "question": "What is a model used for in API Gateway?", "answer_span": "A model is required for generating a strongly typed SDK of an API.", "chunk": "the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of an API. It is also used to validate payloads. A model is convenient for generating a sample mapping template to initiate creation of a production mapping template. Although useful, a model is not required for creating a mapping template. 
Private API See Private API endpoint. Private API endpoint An API endpoint that is exposed through interface VPC endpoints and allows a client to securely access private API resources inside a VPC. Private APIs are isolated from the public internet, and they can only be accessed using VPC endpoints for API Gateway that have been granted access. Private integration An API Gateway integration type for a client to access resources inside a customer's VPC through a private REST API endpoint without exposing the resources to the public internet. Proxy integration A simplified API Gateway integration configuration. You can set up a proxy integration as an HTTP proxy integration or a Lambda proxy integration. For HTTP proxy integration, API Gateway passes the entire request and response between the frontend and an HTTP backend. For Lambda proxy integration, API Gateway sends the entire request as input to a backend Lambda function. API Gateway then transforms the Lambda function output to a frontend HTTP response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. Quick create creates an API with"} +{"global_id": 313, "doc_id": "api-gateway", "chunk_id": "12", "question_id": 2, "question": "What is a Private API endpoint?", "answer_span": "An API endpoint that is exposed through interface VPC endpoints and allows a client to securely access private API resources inside a VPC.", "chunk": "the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of an API. It is also used to validate payloads. A model is convenient for generating a sample mapping template to initiate creation of a production mapping template. Although useful, a model is not required for creating a mapping template. Private API See Private API endpoint. Private API endpoint An API endpoint that is exposed through interface VPC endpoints and allows a client to securely access private API resources inside a VPC. Private APIs are isolated from the public internet, and they can only be accessed using VPC endpoints for API Gateway that have been granted access. Private integration An API Gateway integration type for a client to access resources inside a customer's VPC through a private REST API endpoint without exposing the resources to the public internet. Proxy integration A simplified API Gateway integration configuration. You can set up a proxy integration as an HTTP proxy integration or a Lambda proxy integration. For HTTP proxy integration, API Gateway passes the entire request and response between the frontend and an HTTP backend. For Lambda proxy integration, API Gateway sends the entire request as input to a backend Lambda function. API Gateway then transforms the Lambda function output to a frontend HTTP response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. 
Quick create creates an API with"} +{"global_id": 314, "doc_id": "api-gateway", "chunk_id": "12", "question_id": 3, "question": "What does proxy integration do in API Gateway?", "answer_span": "For HTTP proxy integration, API Gateway passes the entire request and response between the frontend and an HTTP backend.", "chunk": "the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of an API. It is also used to validate payloads. A model is convenient for generating a sample mapping template to initiate creation of a production mapping template. Although useful, a model is not required for creating a mapping template. Private API See Private API endpoint. Private API endpoint An API endpoint that is exposed through interface VPC endpoints and allows a client to securely access private API resources inside a VPC. Private APIs are isolated from the public internet, and they can only be accessed using VPC endpoints for API Gateway that have been granted access. Private integration An API Gateway integration type for a client to access resources inside a customer's VPC through a private REST API endpoint without exposing the resources to the public internet. Proxy integration A simplified API Gateway integration configuration. You can set up a proxy integration as an HTTP proxy integration or a Lambda proxy integration. For HTTP proxy integration, API Gateway passes the entire request and response between the frontend and an HTTP backend. For Lambda proxy integration, API Gateway sends the entire request as input to a backend Lambda function. API Gateway then transforms the Lambda function output to a frontend HTTP response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. Quick create creates an API with"} +{"global_id": 315, "doc_id": "api-gateway", "chunk_id": "12", "question_id": 4, "question": "What is the purpose of quick create in API Gateway?", "answer_span": "You can use quick create to simplify creating an HTTP API.", "chunk": "the method's integration request and integration response to associate a response with a given status code. API Gateway concepts 10 Amazon API Gateway Developer Guide Model A data schema specifying the data structure of a request or response payload. A model is required for generating a strongly typed SDK of an API. It is also used to validate payloads. A model is convenient for generating a sample mapping template to initiate creation of a production mapping template. Although useful, a model is not required for creating a mapping template. Private API See Private API endpoint. Private API endpoint An API endpoint that is exposed through interface VPC endpoints and allows a client to securely access private API resources inside a VPC. Private APIs are isolated from the public internet, and they can only be accessed using VPC endpoints for API Gateway that have been granted access. Private integration An API Gateway integration type for a client to access resources inside a customer's VPC through a private REST API endpoint without exposing the resources to the public internet. 
Proxy integration A simplified API Gateway integration configuration. You can set up a proxy integration as an HTTP proxy integration or a Lambda proxy integration. For HTTP proxy integration, API Gateway passes the entire request and response between the frontend and an HTTP backend. For Lambda proxy integration, API Gateway sends the entire request as input to a backend Lambda function. API Gateway then transforms the Lambda function output to a frontend HTTP response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. Quick create creates an API with"} +{"global_id": 316, "doc_id": "api-gateway", "chunk_id": "13", "question_id": 1, "question": "What is proxy integration most commonly used with in REST APIs?", "answer_span": "proxy integration is most commonly used with a proxy resource", "chunk": "response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. Quick create creates an API with a Lambda or HTTP integration, a default catch-all route, and a default stage that is configured to automatically deploy changes. For more information, see the section called “Create an HTTP API by using the AWS CLI”. API Gateway concepts 11 Amazon API Gateway Developer Guide Regional API endpoint The host name of an API that is deployed to the specified Region and intended to serve clients, such as EC2 instances, in the same AWS Region. API requests are targeted directly to the Region-specific API Gateway API without going through any CloudFront distribution. For inRegion requests, a Regional endpoint bypasses the unnecessary round trip to a CloudFront distribution. In addition, you can apply latency-based routing on Regional endpoints to deploy an API to multiple Regions using the same Regional API endpoint configuration, set the same custom domain name for each deployed API, and configure latency-based DNS records in Route 53 to route client requests to the Region that has the lowest latency. See API endpoints. Route A WebSocket route in API Gateway is used to direct incoming messages to a specific integration, such as an AWS Lambda function, based on the content of the message. When you define your WebSocket API, you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. A default route can also be set for non-matching route keys or to specify a proxy"} +{"global_id": 317, "doc_id": "api-gateway", "chunk_id": "13", "question_id": 2, "question": "What does quick create simplify?", "answer_span": "You can use quick create to simplify creating an HTTP API.", "chunk": "response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. Quick create creates an API with a Lambda or HTTP integration, a default catch-all route, and a default stage that is configured to automatically deploy changes. 
For more information, see the section called “Create an HTTP API by using the AWS CLI”. API Gateway concepts 11 Amazon API Gateway Developer Guide Regional API endpoint The host name of an API that is deployed to the specified Region and intended to serve clients, such as EC2 instances, in the same AWS Region. API requests are targeted directly to the Region-specific API Gateway API without going through any CloudFront distribution. For inRegion requests, a Regional endpoint bypasses the unnecessary round trip to a CloudFront distribution. In addition, you can apply latency-based routing on Regional endpoints to deploy an API to multiple Regions using the same Regional API endpoint configuration, set the same custom domain name for each deployed API, and configure latency-based DNS records in Route 53 to route client requests to the Region that has the lowest latency. See API endpoints. Route A WebSocket route in API Gateway is used to direct incoming messages to a specific integration, such as an AWS Lambda function, based on the content of the message. When you define your WebSocket API, you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. A default route can also be set for non-matching route keys or to specify a proxy"} +{"global_id": 318, "doc_id": "api-gateway", "chunk_id": "13", "question_id": 3, "question": "What is a Regional API endpoint intended to serve?", "answer_span": "intended to serve clients, such as EC2 instances, in the same AWS Region.", "chunk": "response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. Quick create creates an API with a Lambda or HTTP integration, a default catch-all route, and a default stage that is configured to automatically deploy changes. For more information, see the section called “Create an HTTP API by using the AWS CLI”. API Gateway concepts 11 Amazon API Gateway Developer Guide Regional API endpoint The host name of an API that is deployed to the specified Region and intended to serve clients, such as EC2 instances, in the same AWS Region. API requests are targeted directly to the Region-specific API Gateway API without going through any CloudFront distribution. For inRegion requests, a Regional endpoint bypasses the unnecessary round trip to a CloudFront distribution. In addition, you can apply latency-based routing on Regional endpoints to deploy an API to multiple Regions using the same Regional API endpoint configuration, set the same custom domain name for each deployed API, and configure latency-based DNS records in Route 53 to route client requests to the Region that has the lowest latency. See API endpoints. Route A WebSocket route in API Gateway is used to direct incoming messages to a specific integration, such as an AWS Lambda function, based on the content of the message. When you define your WebSocket API, you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. 
A default route can also be set for non-matching route keys or to specify a proxy"} +{"global_id": 319, "doc_id": "api-gateway", "chunk_id": "13", "question_id": 4, "question": "What is used to direct incoming messages to a specific integration in API Gateway?", "answer_span": "A WebSocket route in API Gateway is used to direct incoming messages to a specific integration", "chunk": "response. In REST APIs, proxy integration is most commonly used with a proxy resource, which is represented by a greedy path variable (for example, {proxy+}) combined with a catch-all ANY method. Quick create You can use quick create to simplify creating an HTTP API. Quick create creates an API with a Lambda or HTTP integration, a default catch-all route, and a default stage that is configured to automatically deploy changes. For more information, see the section called “Create an HTTP API by using the AWS CLI”. API Gateway concepts 11 Amazon API Gateway Developer Guide Regional API endpoint The host name of an API that is deployed to the specified Region and intended to serve clients, such as EC2 instances, in the same AWS Region. API requests are targeted directly to the Region-specific API Gateway API without going through any CloudFront distribution. For inRegion requests, a Regional endpoint bypasses the unnecessary round trip to a CloudFront distribution. In addition, you can apply latency-based routing on Regional endpoints to deploy an API to multiple Regions using the same Regional API endpoint configuration, set the same custom domain name for each deployed API, and configure latency-based DNS records in Route 53 to route client requests to the Region that has the lowest latency. See API endpoints. Route A WebSocket route in API Gateway is used to direct incoming messages to a specific integration, such as an AWS Lambda function, based on the content of the message. When you define your WebSocket API, you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. A default route can also be set for non-matching route keys or to specify a proxy"} +{"global_id": 320, "doc_id": "api-gateway", "chunk_id": "14", "question_id": 1, "question": "What is the route key in the context of an integration backend?", "answer_span": "The route key is an attribute in the message body.", "chunk": "you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. A default route can also be set for non-matching route keys or to specify a proxy model that passes the message through as-is to backend components that perform the routing and process the request. Route request The public interface of a WebSocket API method in API Gateway that defines the body that an app developer must send in the requests to access the backend through the API. Route response The public interface of a WebSocket API that defines the status codes, headers, and body models that an app developer should expect from API Gateway. Usage plan A usage plan provides selected API clients with access to one or more deployed REST or WebSocket APIs. You can use a usage plan to configure throttling and quota limits, which are enforced on individual client API keys. 
API Gateway concepts 12 Amazon API Gateway Developer Guide WebSocket connection API Gateway maintains a persistent connection between clients and API Gateway itself. There is no persistent connection between API Gateway and backend integrations such as Lambda functions. Backend services are invoked as needed, based on the content of messages received from clients. Choose between REST APIs and HTTP APIs REST APIs and HTTP APIs are both RESTful API products. REST APIs support more features than HTTP APIs, while HTTP APIs are designed with minimal features so that they can be offered at a lower price. Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The"} +{"global_id": 321, "doc_id": "api-gateway", "chunk_id": "14", "question_id": 2, "question": "What does a usage plan provide to API clients?", "answer_span": "A usage plan provides selected API clients with access to one or more deployed REST or WebSocket APIs.", "chunk": "you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. A default route can also be set for non-matching route keys or to specify a proxy model that passes the message through as-is to backend components that perform the routing and process the request. Route request The public interface of a WebSocket API method in API Gateway that defines the body that an app developer must send in the requests to access the backend through the API. Route response The public interface of a WebSocket API that defines the status codes, headers, and body models that an app developer should expect from API Gateway. Usage plan A usage plan provides selected API clients with access to one or more deployed REST or WebSocket APIs. You can use a usage plan to configure throttling and quota limits, which are enforced on individual client API keys. API Gateway concepts 12 Amazon API Gateway Developer Guide WebSocket connection API Gateway maintains a persistent connection between clients and API Gateway itself. There is no persistent connection between API Gateway and backend integrations such as Lambda functions. Backend services are invoked as needed, based on the content of messages received from clients. Choose between REST APIs and HTTP APIs REST APIs and HTTP APIs are both RESTful API products. REST APIs support more features than HTTP APIs, while HTTP APIs are designed with minimal features so that they can be offered at a lower price. Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The"} +{"global_id": 322, "doc_id": "api-gateway", "chunk_id": "14", "question_id": 3, "question": "What is maintained between clients and API Gateway?", "answer_span": "API Gateway maintains a persistent connection between clients and API Gateway itself.", "chunk": "you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. 
A default route can also be set for non-matching route keys or to specify a proxy model that passes the message through as-is to backend components that perform the routing and process the request. Route request The public interface of a WebSocket API method in API Gateway that defines the body that an app developer must send in the requests to access the backend through the API. Route response The public interface of a WebSocket API that defines the status codes, headers, and body models that an app developer should expect from API Gateway. Usage plan A usage plan provides selected API clients with access to one or more deployed REST or WebSocket APIs. You can use a usage plan to configure throttling and quota limits, which are enforced on individual client API keys. API Gateway concepts 12 Amazon API Gateway Developer Guide WebSocket connection API Gateway maintains a persistent connection between clients and API Gateway itself. There is no persistent connection between API Gateway and backend integrations such as Lambda functions. Backend services are invoked as needed, based on the content of messages received from clients. Choose between REST APIs and HTTP APIs REST APIs and HTTP APIs are both RESTful API products. REST APIs support more features than HTTP APIs, while HTTP APIs are designed with minimal features so that they can be offered at a lower price. Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The"} +{"global_id": 323, "doc_id": "api-gateway", "chunk_id": "14", "question_id": 4, "question": "What should you choose if you need features such as API keys and request validation?", "answer_span": "Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints.", "chunk": "you specify a route key and an integration backend. The route key is an attribute in the message body. When the route key is matched in an incoming message, the integration backend is invoked. A default route can also be set for non-matching route keys or to specify a proxy model that passes the message through as-is to backend components that perform the routing and process the request. Route request The public interface of a WebSocket API method in API Gateway that defines the body that an app developer must send in the requests to access the backend through the API. Route response The public interface of a WebSocket API that defines the status codes, headers, and body models that an app developer should expect from API Gateway. Usage plan A usage plan provides selected API clients with access to one or more deployed REST or WebSocket APIs. You can use a usage plan to configure throttling and quota limits, which are enforced on individual client API keys. API Gateway concepts 12 Amazon API Gateway Developer Guide WebSocket connection API Gateway maintains a persistent connection between clients and API Gateway itself. There is no persistent connection between API Gateway and backend integrations such as Lambda functions. Backend services are invoked as needed, based on the content of messages received from clients. Choose between REST APIs and HTTP APIs REST APIs and HTTP APIs are both RESTful API products. REST APIs support more features than HTTP APIs, while HTTP APIs are designed with minimal features so that they can be offered at a lower price. 
Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The"} +{"global_id": 324, "doc_id": "api-gateway", "chunk_id": "15", "question_id": 1, "question": "What are REST APIs designed for?", "answer_span": "designed with minimal features so that they can be offered at a lower price.", "chunk": "designed with minimal features so that they can be offered at a lower price. Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The following sections summarize core features that are available in REST APIs and HTTP APIs. When necessary, additional links are provided to navigate between the REST API and HTTP API sections of the API Gateway Developer Guide. Endpoint type The endpoint type refers to the endpoint that API Gateway creates for your API. For more information, see the section called “API Gateway endpoint types”. Endpoint types REST API HTTP API Yes No Yes Yes Yes No Edge-optimized Regional Private Choose between REST APIs and HTTP APIs 13"} +{"global_id": 325, "doc_id": "api-gateway", "chunk_id": "15", "question_id": 2, "question": "What features should you choose REST APIs for?", "answer_span": "if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints.", "chunk": "designed with minimal features so that they can be offered at a lower price. Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The following sections summarize core features that are available in REST APIs and HTTP APIs. When necessary, additional links are provided to navigate between the REST API and HTTP API sections of the API Gateway Developer Guide. Endpoint type The endpoint type refers to the endpoint that API Gateway creates for your API. For more information, see the section called “API Gateway endpoint types”. Endpoint types REST API HTTP API Yes No Yes Yes Yes No Edge-optimized Regional Private Choose between REST APIs and HTTP APIs 13"} +{"global_id": 326, "doc_id": "api-gateway", "chunk_id": "15", "question_id": 3, "question": "What should you choose if you don't need the features included with REST APIs?", "answer_span": "Choose HTTP APIs if you don't need the features included with REST APIs.", "chunk": "designed with minimal features so that they can be offered at a lower price. Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The following sections summarize core features that are available in REST APIs and HTTP APIs. When necessary, additional links are provided to navigate between the REST API and HTTP API sections of the API Gateway Developer Guide. Endpoint type The endpoint type refers to the endpoint that API Gateway creates for your API. For more information, see the section called “API Gateway endpoint types”. 
Endpoint types REST API HTTP API Yes No Yes Yes Yes No Edge-optimized Regional Private Choose between REST APIs and HTTP APIs 13"} +{"global_id": 327, "doc_id": "api-gateway", "chunk_id": "15", "question_id": 4, "question": "What does the endpoint type refer to?", "answer_span": "The endpoint type refers to the endpoint that API Gateway creates for your API.", "chunk": "designed with minimal features so that they can be offered at a lower price. Choose REST APIs if you need features such as API keys, per-client throttling, request validation, AWS WAF integration, or private API endpoints. Choose HTTP APIs if you don't need the features included with REST APIs. The following sections summarize core features that are available in REST APIs and HTTP APIs. When necessary, additional links are provided to navigate between the REST API and HTTP API sections of the API Gateway Developer Guide. Endpoint type The endpoint type refers to the endpoint that API Gateway creates for your API. For more information, see the section called “API Gateway endpoint types”. Endpoint types REST API HTTP API Yes No Yes Yes Yes No Edge-optimized Regional Private Choose between REST APIs and HTTP APIs 13"} +{"global_id": 328, "doc_id": "qbusiness", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Q Business?", "answer_span": "Amazon Q Business is a fully managed, generative-AI powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data.", "chunk": "Amazon Q Business User Guide What is Amazon Q Business? Powered by Amazon Bedrock: AWS implements automated abuse detection. Because Amazon Q Business is built on Amazon Bedrock, users can take full advantage of the controls implement ed in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intellige nce (AI). Amazon Q Business is a fully managed, generative-AI powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, HR, and benefits help desks. Amazon Q Business also helps streamline tasks and accelerate problem solving. You can use Amazon Q Business to create and share task automation applications, or perform routine actions like submitting time-off requests and sending meeting invites. Amazon Q Business integrates with services like Amazon Kendra and other supported data sources such as Amazon S3, Microsoft SharePoint, and Salesforce. To get started with Amazon Q Business, visit Amazon Q Business. What is Amazon Q Business? Topics • Benefits of Amazon Q Business • Pricing and availability • Accessing Amazon Q Business • Related services • Are you a first-time Amazon Q Business user? Benefits of Amazon Q Business Some of the benefits of Amazon Q Business include: Benefits of Amazon Q Business 1 Amazon Q Business User Guide Accurate and comprehensive answers Amazon Q Business generates comprehensive responses to natural language queries from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. 
When a query results in conflicting or inconsistent answers,"} +{"global_id": 329, "doc_id": "qbusiness", "chunk_id": "0", "question_id": 2, "question": "What are some use cases for Amazon Q Business?", "answer_span": "It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, HR, and benefits help desks.", "chunk": "Amazon Q Business User Guide What is Amazon Q Business? Powered by Amazon Bedrock: AWS implements automated abuse detection. Because Amazon Q Business is built on Amazon Bedrock, users can take full advantage of the controls implement ed in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intellige nce (AI). Amazon Q Business is a fully managed, generative-AI powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, HR, and benefits help desks. Amazon Q Business also helps streamline tasks and accelerate problem solving. You can use Amazon Q Business to create and share task automation applications, or perform routine actions like submitting time-off requests and sending meeting invites. Amazon Q Business integrates with services like Amazon Kendra and other supported data sources such as Amazon S3, Microsoft SharePoint, and Salesforce. To get started with Amazon Q Business, visit Amazon Q Business. What is Amazon Q Business? Topics • Benefits of Amazon Q Business • Pricing and availability • Accessing Amazon Q Business • Related services • Are you a first-time Amazon Q Business user? Benefits of Amazon Q Business Some of the benefits of Amazon Q Business include: Benefits of Amazon Q Business 1 Amazon Q Business User Guide Accurate and comprehensive answers Amazon Q Business generates comprehensive responses to natural language queries from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. When a query results in conflicting or inconsistent answers,"} +{"global_id": 330, "doc_id": "qbusiness", "chunk_id": "0", "question_id": 3, "question": "What services does Amazon Q Business integrate with?", "answer_span": "Amazon Q Business integrates with services like Amazon Kendra and other supported data sources such as Amazon S3, Microsoft SharePoint, and Salesforce.", "chunk": "Amazon Q Business User Guide What is Amazon Q Business? Powered by Amazon Bedrock: AWS implements automated abuse detection. Because Amazon Q Business is built on Amazon Bedrock, users can take full advantage of the controls implement ed in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intellige nce (AI). Amazon Q Business is a fully managed, generative-AI powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, HR, and benefits help desks. Amazon Q Business also helps streamline tasks and accelerate problem solving. 
You can use Amazon Q Business to create and share task automation applications, or perform routine actions like submitting time-off requests and sending meeting invites. Amazon Q Business integrates with services like Amazon Kendra and other supported data sources such as Amazon S3, Microsoft SharePoint, and Salesforce. To get started with Amazon Q Business, visit Amazon Q Business. What is Amazon Q Business? Topics • Benefits of Amazon Q Business • Pricing and availability • Accessing Amazon Q Business • Related services • Are you a first-time Amazon Q Business user? Benefits of Amazon Q Business Some of the benefits of Amazon Q Business include: Benefits of Amazon Q Business 1 Amazon Q Business User Guide Accurate and comprehensive answers Amazon Q Business generates comprehensive responses to natural language queries from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. When a query results in conflicting or inconsistent answers,"} +{"global_id": 331, "doc_id": "qbusiness", "chunk_id": "0", "question_id": 4, "question": "How does Amazon Q Business generate answers?", "answer_span": "Amazon Q Business generates comprehensive responses to natural language queries from users by analyzing information across all enterprise content that it has access to.", "chunk": "Amazon Q Business User Guide What is Amazon Q Business? Powered by Amazon Bedrock: AWS implements automated abuse detection. Because Amazon Q Business is built on Amazon Bedrock, users can take full advantage of the controls implement ed in Amazon Bedrock to enforce safety, security, and the responsible use of artificial intellige nce (AI). Amazon Q Business is a fully managed, generative-AI powered assistant that you can configure to answer questions, provide summaries, generate content, and complete tasks based on your enterprise data. It allows end users to receive immediate, permissions-aware responses from enterprise data sources with citations, for use cases such as IT, HR, and benefits help desks. Amazon Q Business also helps streamline tasks and accelerate problem solving. You can use Amazon Q Business to create and share task automation applications, or perform routine actions like submitting time-off requests and sending meeting invites. Amazon Q Business integrates with services like Amazon Kendra and other supported data sources such as Amazon S3, Microsoft SharePoint, and Salesforce. To get started with Amazon Q Business, visit Amazon Q Business. What is Amazon Q Business? Topics • Benefits of Amazon Q Business • Pricing and availability • Accessing Amazon Q Business • Related services • Are you a first-time Amazon Q Business user? Benefits of Amazon Q Business Some of the benefits of Amazon Q Business include: Benefits of Amazon Q Business 1 Amazon Q Business User Guide Accurate and comprehensive answers Amazon Q Business generates comprehensive responses to natural language queries from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. 
When a query results in conflicting or inconsistent answers,"} +{"global_id": 332, "doc_id": "qbusiness", "chunk_id": "1", "question_id": 1, "question": "How does Amazon Q Business avoid incorrect statements?", "answer_span": "It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response.", "chunk": "from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. When a query results in conflicting or inconsistent answers, Amazon Q Business lists the details for each answer. With hallucination mitigation, Amazon Q Business checks supported chat responses for inconsistencies and automatically corrects them in real-time during conversations. Simple to deploy and manage Amazon Q Business takes care of the complex task of developing and managing machine learning infrastructure and models so that you can build your chat solution quickly. Amazon Q Business connects to your data and ingests it for processing using its pre-built connectors, document retrievers, document upload capabilities. Configurable and customizable Amazon Q Business provides you with the flexibility of choosing what sources should be used to respond to user queries. You can control whether the responses should only use your enterprise data, or use both enterprise data and model knowledge. For public-facing scenarios, you can create anonymous applications that don't require user authentication, allowing you to serve a broader audience. You can customize your chat interface with your organization's branding, visual themes, and conversation starters to create a tailored user interface. Data and application security Amazon Q Business supports access control for your data so that the right users can access the right content. Its responses to questions are based on the content that your end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. You can also connect Amazon Q to any third-party"} +{"global_id": 333, "doc_id": "qbusiness", "chunk_id": "1", "question_id": 2, "question": "What does Amazon Q Business do when a query results in conflicting answers?", "answer_span": "When a query results in conflicting or inconsistent answers, Amazon Q Business lists the details for each answer.", "chunk": "from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. When a query results in conflicting or inconsistent answers, Amazon Q Business lists the details for each answer. With hallucination mitigation, Amazon Q Business checks supported chat responses for inconsistencies and automatically corrects them in real-time during conversations. Simple to deploy and manage Amazon Q Business takes care of the complex task of developing and managing machine learning infrastructure and models so that you can build your chat solution quickly. 
Amazon Q Business connects to your data and ingests it for processing using its pre-built connectors, document retrievers, document upload capabilities. Configurable and customizable Amazon Q Business provides you with the flexibility of choosing what sources should be used to respond to user queries. You can control whether the responses should only use your enterprise data, or use both enterprise data and model knowledge. For public-facing scenarios, you can create anonymous applications that don't require user authentication, allowing you to serve a broader audience. You can customize your chat interface with your organization's branding, visual themes, and conversation starters to create a tailored user interface. Data and application security Amazon Q Business supports access control for your data so that the right users can access the right content. Its responses to questions are based on the content that your end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. You can also connect Amazon Q to any third-party"} +{"global_id": 334, "doc_id": "qbusiness", "chunk_id": "1", "question_id": 3, "question": "What capabilities does Amazon Q Business provide for data processing?", "answer_span": "Amazon Q Business connects to your data and ingests it for processing using its pre-built connectors, document retrievers, document upload capabilities.", "chunk": "from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. When a query results in conflicting or inconsistent answers, Amazon Q Business lists the details for each answer. With hallucination mitigation, Amazon Q Business checks supported chat responses for inconsistencies and automatically corrects them in real-time during conversations. Simple to deploy and manage Amazon Q Business takes care of the complex task of developing and managing machine learning infrastructure and models so that you can build your chat solution quickly. Amazon Q Business connects to your data and ingests it for processing using its pre-built connectors, document retrievers, document upload capabilities. Configurable and customizable Amazon Q Business provides you with the flexibility of choosing what sources should be used to respond to user queries. You can control whether the responses should only use your enterprise data, or use both enterprise data and model knowledge. For public-facing scenarios, you can create anonymous applications that don't require user authentication, allowing you to serve a broader audience. You can customize your chat interface with your organization's branding, visual themes, and conversation starters to create a tailored user interface. Data and application security Amazon Q Business supports access control for your data so that the right users can access the right content. Its responses to questions are based on the content that your end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. 
You can also connect Amazon Q to any third-party"} +{"global_id": 335, "doc_id": "qbusiness", "chunk_id": "1", "question_id": 4, "question": "What can you customize in your chat interface with Amazon Q Business?", "answer_span": "You can customize your chat interface with your organization's branding, visual themes, and conversation starters to create a tailored user interface.", "chunk": "from users by analyzing information across all enterprise content that it has access to. It can avoid incorrect statements by confining its generated responses to existing enterprise data, and provides citations to the sources that it used to generate its response. When a query results in conflicting or inconsistent answers, Amazon Q Business lists the details for each answer. With hallucination mitigation, Amazon Q Business checks supported chat responses for inconsistencies and automatically corrects them in real-time during conversations. Simple to deploy and manage Amazon Q Business takes care of the complex task of developing and managing machine learning infrastructure and models so that you can build your chat solution quickly. Amazon Q Business connects to your data and ingests it for processing using its pre-built connectors, document retrievers, document upload capabilities. Configurable and customizable Amazon Q Business provides you with the flexibility of choosing what sources should be used to respond to user queries. You can control whether the responses should only use your enterprise data, or use both enterprise data and model knowledge. For public-facing scenarios, you can create anonymous applications that don't require user authentication, allowing you to serve a broader audience. You can customize your chat interface with your organization's branding, visual themes, and conversation starters to create a tailored user interface. Data and application security Amazon Q Business supports access control for your data so that the right users can access the right content. Its responses to questions are based on the content that your end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. You can also connect Amazon Q to any third-party"} +{"global_id": 336, "doc_id": "qbusiness", "chunk_id": "2", "question_id": 1, "question": "What can you use to manage end user access for Amazon Q Business?", "answer_span": "You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business.", "chunk": "end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. You can also connect Amazon Q to any third-party application using plugins to perform actions and query application data. With data accessors, you can securely share your enterprise data with verified ISVs, allowing them to retrieve relevant content from your Amazon Q index. Amazon Q Business integrations bring AI-powered assistance directly into daily workflows through browser extensions and applications for Slack, Microsoft Teams, and Microsoft Office. 
You can Benefits of Amazon Q Business 2 Amazon Q Business User Guide also embed Amazon Q Business directly into your applications and websites to enhance user productivity. Pricing and availability Amazon Q Business charges you both for user subscriptions to applications, and for index capacity. For information about what's included in the tiers of user subscriptions and index capacity, see Subscription and index pricing. For pricing information, including examples of charges for index capacity, subscribing and unsubscribing users to Amazon Q Business tiers, upgrading and downgrading Amazon Q Business tiers, and more, see Amazon Q Business Pricing. For a list of regions where Amazon Q Business is currently available, see Supported regions. Accessing Amazon Q Business You can access Amazon Q Business in the following ways in the AWS Regions that it's available in: AWS Management Console You can use the AWS Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. For more"} +{"global_id": 337, "doc_id": "qbusiness", "chunk_id": "2", "question_id": 2, "question": "What does Amazon Q Business offer out-of-the-box?", "answer_span": "Amazon Q Business offers out-of-the-box connections to multiple supported data sources.", "chunk": "end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. You can also connect Amazon Q to any third-party application using plugins to perform actions and query application data. With data accessors, you can securely share your enterprise data with verified ISVs, allowing them to retrieve relevant content from your Amazon Q index. Amazon Q Business integrations bring AI-powered assistance directly into daily workflows through browser extensions and applications for Slack, Microsoft Teams, and Microsoft Office. You can Benefits of Amazon Q Business 2 Amazon Q Business User Guide also embed Amazon Q Business directly into your applications and websites to enhance user productivity. Pricing and availability Amazon Q Business charges you both for user subscriptions to applications, and for index capacity. For information about what's included in the tiers of user subscriptions and index capacity, see Subscription and index pricing. For pricing information, including examples of charges for index capacity, subscribing and unsubscribing users to Amazon Q Business tiers, upgrading and downgrading Amazon Q Business tiers, and more, see Amazon Q Business Pricing. For a list of regions where Amazon Q Business is currently available, see Supported regions. Accessing Amazon Q Business You can access Amazon Q Business in the following ways in the AWS Regions that it's available in: AWS Management Console You can use the AWS Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. 
For more"} +{"global_id": 338, "doc_id": "qbusiness", "chunk_id": "2", "question_id": 3, "question": "How can you access Amazon Q Business programmatically?", "answer_span": "To access Amazon Q Business programmatically, you can use the Amazon Q API.", "chunk": "end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. You can also connect Amazon Q to any third-party application using plugins to perform actions and query application data. With data accessors, you can securely share your enterprise data with verified ISVs, allowing them to retrieve relevant content from your Amazon Q index. Amazon Q Business integrations bring AI-powered assistance directly into daily workflows through browser extensions and applications for Slack, Microsoft Teams, and Microsoft Office. You can Benefits of Amazon Q Business 2 Amazon Q Business User Guide also embed Amazon Q Business directly into your applications and websites to enhance user productivity. Pricing and availability Amazon Q Business charges you both for user subscriptions to applications, and for index capacity. For information about what's included in the tiers of user subscriptions and index capacity, see Subscription and index pricing. For pricing information, including examples of charges for index capacity, subscribing and unsubscribing users to Amazon Q Business tiers, upgrading and downgrading Amazon Q Business tiers, and more, see Amazon Q Business Pricing. For a list of regions where Amazon Q Business is currently available, see Supported regions. Accessing Amazon Q Business You can access Amazon Q Business in the following ways in the AWS Regions that it's available in: AWS Management Console You can use the AWS Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. For more"} +{"global_id": 339, "doc_id": "qbusiness", "chunk_id": "2", "question_id": 4, "question": "What charges does Amazon Q Business impose?", "answer_span": "Amazon Q Business charges you both for user subscriptions to applications, and for index capacity.", "chunk": "end user has permissions to access. You can use AWS IAM Identity Center or AWS Identity and Access Management to manage end user access for Amazon Q Business. Broad connectivity Amazon Q Business offers out-of-the-box connections to multiple supported data sources. You can also connect Amazon Q to any third-party application using plugins to perform actions and query application data. With data accessors, you can securely share your enterprise data with verified ISVs, allowing them to retrieve relevant content from your Amazon Q index. Amazon Q Business integrations bring AI-powered assistance directly into daily workflows through browser extensions and applications for Slack, Microsoft Teams, and Microsoft Office. You can Benefits of Amazon Q Business 2 Amazon Q Business User Guide also embed Amazon Q Business directly into your applications and websites to enhance user productivity. Pricing and availability Amazon Q Business charges you both for user subscriptions to applications, and for index capacity. 
For information about what's included in the tiers of user subscriptions and index capacity, see Subscription and index pricing. For pricing information, including examples of charges for index capacity, subscribing and unsubscribing users to Amazon Q Business tiers, upgrading and downgrading Amazon Q Business tiers, and more, see Amazon Q Business Pricing. For a list of regions where Amazon Q Business is currently available, see Supported regions. Accessing Amazon Q Business You can access Amazon Q Business in the following ways in the AWS Regions that it's available in: AWS Management Console You can use the AWS Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. For more"} +{"global_id": 340, "doc_id": "qbusiness", "chunk_id": "3", "question_id": 1, "question": "What is the Management Console?", "answer_span": "Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources.", "chunk": "Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. For more information, see the Amazon Q Business API Reference. AWS Command Line Interface The AWS Command Line Interface (AWS CLI) is an open source tool. You can use the AWS CLI to interact with AWS services using commands in your command line shell. If you want to build task-based scripts, using the command line can be faster and more convenient than using the console. SDKs AWS SDKs provide language APIs for AWS services to use programmatically. Pricing and availability 3 Amazon Q Business User Guide Related services The following are some of the other AWS services that Amazon Q Business integrates with: Amazon Kendra Amazon Kendra is an intelligent search service that uses natural language processing and machine learning algorithms to return specific answers from your data for end user queries. If you're already an Amazon Kendra user, you can use Amazon Kendra as a data retriever for your Amazon Q Business web application. Amazon S3 Amazon S3 is an object storage service. If you're an Amazon S3 user, you can use Amazon S3 as a data source for your Amazon Q Business application. QuickSight QuickSight is a business intelligence service that helps you create and share interactive dashboards and reports. You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? If you're a first-time user of Amazon Q Business, we recommend that you read the"} +{"global_id": 341, "doc_id": "qbusiness", "chunk_id": "3", "question_id": 2, "question": "How can you access Amazon Q Business programmatically?", "answer_span": "To access Amazon Q Business programmatically, you can use the Amazon Q API.", "chunk": "Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. 
Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. For more information, see the Amazon Q Business API Reference. AWS Command Line Interface The AWS Command Line Interface (AWS CLI) is an open source tool. You can use the AWS CLI to interact with AWS services using commands in your command line shell. If you want to build task-based scripts, using the command line can be faster and more convenient than using the console. SDKs AWS SDKs provide language APIs for AWS services to use programmatically. Pricing and availability 3 Amazon Q Business User Guide Related services The following are some of the other AWS services that Amazon Q Business integrates with: Amazon Kendra Amazon Kendra is an intelligent search service that uses natural language processing and machine learning algorithms to return specific answers from your data for end user queries. If you're already an Amazon Kendra user, you can use Amazon Kendra as a data retriever for your Amazon Q Business web application. Amazon S3 Amazon S3 is an object storage service. If you're an Amazon S3 user, you can use Amazon S3 as a data source for your Amazon Q Business application. QuickSight QuickSight is a business intelligence service that helps you create and share interactive dashboards and reports. You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? If you're a first-time user of Amazon Q Business, we recommend that you read the"} +{"global_id": 342, "doc_id": "qbusiness", "chunk_id": "3", "question_id": 3, "question": "What is Amazon Kendra?", "answer_span": "Amazon Kendra is an intelligent search service that uses natural language processing and machine learning algorithms to return specific answers from your data for end user queries.", "chunk": "Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. For more information, see the Amazon Q Business API Reference. AWS Command Line Interface The AWS Command Line Interface (AWS CLI) is an open source tool. You can use the AWS CLI to interact with AWS services using commands in your command line shell. If you want to build task-based scripts, using the command line can be faster and more convenient than using the console. SDKs AWS SDKs provide language APIs for AWS services to use programmatically. Pricing and availability 3 Amazon Q Business User Guide Related services The following are some of the other AWS services that Amazon Q Business integrates with: Amazon Kendra Amazon Kendra is an intelligent search service that uses natural language processing and machine learning algorithms to return specific answers from your data for end user queries. If you're already an Amazon Kendra user, you can use Amazon Kendra as a data retriever for your Amazon Q Business web application. Amazon S3 Amazon S3 is an object storage service. If you're an Amazon S3 user, you can use Amazon S3 as a data source for your Amazon Q Business application. QuickSight QuickSight is a business intelligence service that helps you create and share interactive dashboards and reports. 
You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? If you're a first-time user of Amazon Q Business, we recommend that you read the"} +{"global_id": 343, "doc_id": "qbusiness", "chunk_id": "3", "question_id": 4, "question": "What does QuickSight help you create?", "answer_span": "QuickSight is a business intelligence service that helps you create and share interactive dashboards and reports.", "chunk": "Management Console—a browser-based interface to interact with AWS services—to access the Amazon Q Business console and resources. You can perform most Amazon Q Business tasks using the Amazon Q Business console. Amazon Q Business API To access Amazon Q Business programmatically, you can use the Amazon Q API. For more information, see the Amazon Q Business API Reference. AWS Command Line Interface The AWS Command Line Interface (AWS CLI) is an open source tool. You can use the AWS CLI to interact with AWS services using commands in your command line shell. If you want to build task-based scripts, using the command line can be faster and more convenient than using the console. SDKs AWS SDKs provide language APIs for AWS services to use programmatically. Pricing and availability 3 Amazon Q Business User Guide Related services The following are some of the other AWS services that Amazon Q Business integrates with: Amazon Kendra Amazon Kendra is an intelligent search service that uses natural language processing and machine learning algorithms to return specific answers from your data for end user queries. If you're already an Amazon Kendra user, you can use Amazon Kendra as a data retriever for your Amazon Q Business web application. Amazon S3 Amazon S3 is an object storage service. If you're an Amazon S3 user, you can use Amazon S3 as a data source for your Amazon Q Business application. QuickSight QuickSight is a business intelligence service that helps you create and share interactive dashboards and reports. You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? If you're a first-time user of Amazon Q Business, we recommend that you read the"} +{"global_id": 344, "doc_id": "qbusiness", "chunk_id": "4", "question_id": 1, "question": "What can you integrate Amazon Q Business with?", "answer_span": "You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards.", "chunk": "You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? If you're a first-time user of Amazon Q Business, we recommend that you read the following sections in order: How it works Introduces Amazon Q Business components and describes how they work to create your Retrieval Augmented Generation (RAG) solution. Key concepts Explains key concepts and important Amazon Q Business terminology. Setting up Outlines how to set up Amazon Q Business so that you can begin creating your Amazon Q Business application and web experience. 
Related services 4 Amazon Q Business User Guide Creating an application Explains how to create the Amazon Q Business application integrated with IAM Identity Center. Connecting Amazon Q Business data source connectors Configuration information for specific connectors to use with your Amazon Q Business web experience. Are you a first-time Amazon Q Business user? 5 Amazon Q Business User Guide Key concepts of Amazon Q Business This section describes the key concepts and terms related to Amazon Q Business. Topics • Application environment • ACLs (Access Control Lists) • Amazon Q Apps • Analytics dashboard • Audio and video extraction • Browser extensions • Chat orchestration • Custom document enrichment • Data accessors • Data source • Data source connector • Document • Document attributes • Field mappings • Filtering using document attributes • Foundation model • Guardrails • Hallucination • Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever"} +{"global_id": 345, "doc_id": "qbusiness", "chunk_id": "4", "question_id": 2, "question": "What do you recommend for first-time Amazon Q Business users?", "answer_span": "If you're a first-time user of Amazon Q Business, we recommend that you read the following sections in order: How it works Introduces Amazon Q Business components and describes how they work to create your Retrieval Augmented Generation (RAG) solution.", "chunk": "You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? If you're a first-time user of Amazon Q Business, we recommend that you read the following sections in order: How it works Introduces Amazon Q Business components and describes how they work to create your Retrieval Augmented Generation (RAG) solution. Key concepts Explains key concepts and important Amazon Q Business terminology. Setting up Outlines how to set up Amazon Q Business so that you can begin creating your Amazon Q Business application and web experience. Related services 4 Amazon Q Business User Guide Creating an application Explains how to create the Amazon Q Business application integrated with IAM Identity Center. Connecting Amazon Q Business data source connectors Configuration information for specific connectors to use with your Amazon Q Business web experience. Are you a first-time Amazon Q Business user? 5 Amazon Q Business User Guide Key concepts of Amazon Q Business This section describes the key concepts and terms related to Amazon Q Business. 
Topics • Application environment • ACLs (Access Control Lists) • Amazon Q Apps • Analytics dashboard • Audio and video extraction • Browser extensions • Chat orchestration • Custom document enrichment • Data accessors • Data source • Data source connector • Document • Document attributes • Field mappings • Filtering using document attributes • Foundation model • Guardrails • Hallucination • Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever"} +{"global_id": 346, "doc_id": "qbusiness", "chunk_id": "4", "question_id": 3, "question": "What does the 'Creating an application' section explain?", "answer_span": "Explains how to create the Amazon Q Business application integrated with IAM Identity Center.", "chunk": "You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? If you're a first-time user of Amazon Q Business, we recommend that you read the following sections in order: How it works Introduces Amazon Q Business components and describes how they work to create your Retrieval Augmented Generation (RAG) solution. Key concepts Explains key concepts and important Amazon Q Business terminology. Setting up Outlines how to set up Amazon Q Business so that you can begin creating your Amazon Q Business application and web experience. Related services 4 Amazon Q Business User Guide Creating an application Explains how to create the Amazon Q Business application integrated with IAM Identity Center. Connecting Amazon Q Business data source connectors Configuration information for specific connectors to use with your Amazon Q Business web experience. Are you a first-time Amazon Q Business user? 5 Amazon Q Business User Guide Key concepts of Amazon Q Business This section describes the key concepts and terms related to Amazon Q Business. Topics • Application environment • ACLs (Access Control Lists) • Amazon Q Apps • Analytics dashboard • Audio and video extraction • Browser extensions • Chat orchestration • Custom document enrichment • Data accessors • Data source • Data source connector • Document • Document attributes • Field mappings • Filtering using document attributes • Foundation model • Guardrails • Hallucination • Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever"} +{"global_id": 347, "doc_id": "qbusiness", "chunk_id": "4", "question_id": 4, "question": "What does the 'Key concepts' section describe?", "answer_span": "This section describes the key concepts and terms related to Amazon Q Business.", "chunk": "You can integrate Amazon Q Business with QuickSight to enable users to ask natural language questions about their data and receive AI-generated insights directly within their dashboards. Are you a first-time Amazon Q Business user? 
If you're a first-time user of Amazon Q Business, we recommend that you read the following sections in order: How it works Introduces Amazon Q Business components and describes how they work to create your Retrieval Augmented Generation (RAG) solution. Key concepts Explains key concepts and important Amazon Q Business terminology. Setting up Outlines how to set up Amazon Q Business so that you can begin creating your Amazon Q Business application and web experience. Related services 4 Amazon Q Business User Guide Creating an application Explains how to create the Amazon Q Business application integrated with IAM Identity Center. Connecting Amazon Q Business data source connectors Configuration information for specific connectors to use with your Amazon Q Business web experience. Are you a first-time Amazon Q Business user? 5 Amazon Q Business User Guide Key concepts of Amazon Q Business This section describes the key concepts and terms related to Amazon Q Business. Topics • Application environment • ACLs (Access Control Lists) • Amazon Q Apps • Analytics dashboard • Audio and video extraction • Browser extensions • Chat orchestration • Custom document enrichment • Data accessors • Data source • Data source connector • Document • Document attributes • Field mappings • Filtering using document attributes • Foundation model • Guardrails • Hallucination • Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever"} +{"global_id": 348, "doc_id": "qbusiness", "chunk_id": "5", "question_id": 1, "question": "What is the primary resource used to create a chat solution in Amazon Q Business?", "answer_span": "An Amazon Q Business application environment is the primary resource that you use to create a chat solution.", "chunk": "• Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever • Retrieval Augmented Generation • Relevance tuning • Subscription tiers • Tags • Visual content extraction • User store • Web experience Application environment An Amazon Q Business application environment is the primary resource that you use to create a chat solution. To create the application environment, you can use either the Amazon Q Business console or Amazon Q Business API actions. Amazon Q Business offers four distinct methods for creating applications: the standard approach with IAM Identity Center integration, an IAM federation option for AWS-centric environments, an anonymous application method for public-facing scenarios, and a specialized QuickSight integration for analytics-focused implementations. Each creation pathway provides different authentication mechanisms and integration capabilities, allowing organizations to select the most appropriate solution based on their security requirements and existing infrastructure. ACLs (Access Control Lists) ACLs control user and system actions for resources. Users can read, write, execute, or modify data based on ACL permissions. 
Amazon Q Apps Amazon Q Business allows web experience users to create lightweight, purpose-built Q Apps to fulfill specific tasks from within their web experience. For example, you can use Amazon Q Business Application environment 13 Amazon Q Business User Guide to create an app with a web experience that exclusively generates marketing-related content to improve your marketing team's productivity. Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. For more"} +{"global_id": 349, "doc_id": "qbusiness", "chunk_id": "5", "question_id": 2, "question": "What are the four distinct methods for creating applications in Amazon Q Business?", "answer_span": "Amazon Q Business offers four distinct methods for creating applications: the standard approach with IAM Identity Center integration, an IAM federation option for AWS-centric environments, an anonymous application method for public-facing scenarios, and a specialized QuickSight integration for analytics-focused implementations.", "chunk": "• Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever • Retrieval Augmented Generation • Relevance tuning • Subscription tiers • Tags • Visual content extraction • User store • Web experience Application environment An Amazon Q Business application environment is the primary resource that you use to create a chat solution. To create the application environment, you can use either the Amazon Q Business console or Amazon Q Business API actions. Amazon Q Business offers four distinct methods for creating applications: the standard approach with IAM Identity Center integration, an IAM federation option for AWS-centric environments, an anonymous application method for public-facing scenarios, and a specialized QuickSight integration for analytics-focused implementations. Each creation pathway provides different authentication mechanisms and integration capabilities, allowing organizations to select the most appropriate solution based on their security requirements and existing infrastructure. ACLs (Access Control Lists) ACLs control user and system actions for resources. Users can read, write, execute, or modify data based on ACL permissions. Amazon Q Apps Amazon Q Business allows web experience users to create lightweight, purpose-built Q Apps to fulfill specific tasks from within their web experience. For example, you can use Amazon Q Business Application environment 13 Amazon Q Business User Guide to create an app with a web experience that exclusively generates marketing-related content to improve your marketing team's productivity. Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. 
For more"} +{"global_id": 350, "doc_id": "qbusiness", "chunk_id": "5", "question_id": 3, "question": "What do ACLs control in Amazon Q Business?", "answer_span": "ACLs control user and system actions for resources.", "chunk": "• Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever • Retrieval Augmented Generation • Relevance tuning • Subscription tiers • Tags • Visual content extraction • User store • Web experience Application environment An Amazon Q Business application environment is the primary resource that you use to create a chat solution. To create the application environment, you can use either the Amazon Q Business console or Amazon Q Business API actions. Amazon Q Business offers four distinct methods for creating applications: the standard approach with IAM Identity Center integration, an IAM federation option for AWS-centric environments, an anonymous application method for public-facing scenarios, and a specialized QuickSight integration for analytics-focused implementations. Each creation pathway provides different authentication mechanisms and integration capabilities, allowing organizations to select the most appropriate solution based on their security requirements and existing infrastructure. ACLs (Access Control Lists) ACLs control user and system actions for resources. Users can read, write, execute, or modify data based on ACL permissions. Amazon Q Apps Amazon Q Business allows web experience users to create lightweight, purpose-built Q Apps to fulfill specific tasks from within their web experience. For example, you can use Amazon Q Business Application environment 13 Amazon Q Business User Guide to create an app with a web experience that exclusively generates marketing-related content to improve your marketing team's productivity. Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. For more"} +{"global_id": 351, "doc_id": "qbusiness", "chunk_id": "5", "question_id": 4, "question": "What can web experience users create using Amazon Q Business?", "answer_span": "Amazon Q Business allows web experience users to create lightweight, purpose-built Q Apps to fulfill specific tasks from within their web experience.", "chunk": "• Hallucination mitigation • IAM Identity Center • Identity Federation through IAM • Identity provider • Index • Index capacity • Integrations • ISV integration Key concepts 12 Amazon Q Business User Guide • Large language model • Principal Mapping • Plugins • Quick prompts • Response personalization • Retriever • Retrieval Augmented Generation • Relevance tuning • Subscription tiers • Tags • Visual content extraction • User store • Web experience Application environment An Amazon Q Business application environment is the primary resource that you use to create a chat solution. To create the application environment, you can use either the Amazon Q Business console or Amazon Q Business API actions. 
Amazon Q Business offers four distinct methods for creating applications: the standard approach with IAM Identity Center integration, an IAM federation option for AWS-centric environments, an anonymous application method for public-facing scenarios, and a specialized QuickSight integration for analytics-focused implementations. Each creation pathway provides different authentication mechanisms and integration capabilities, allowing organizations to select the most appropriate solution based on their security requirements and existing infrastructure. ACLs (Access Control Lists) ACLs control user and system actions for resources. Users can read, write, execute, or modify data based on ACL permissions. Amazon Q Apps Amazon Q Business allows web experience users to create lightweight, purpose-built Q Apps to fulfill specific tasks from within their web experience. For example, you can use Amazon Q Business Application environment 13 Amazon Q Business User Guide to create an app with a web experience that exclusively generates marketing-related content to improve your marketing team's productivity. Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. For more"} +{"global_id": 352, "doc_id": "qbusiness", "chunk_id": "6", "question_id": 1, "question": "What does the Amazon Q Business analytics dashboard provide?", "answer_span": "The Amazon Q Business analytics dashboard provides comprehensive insights into how users interact with your application.", "chunk": "experience that exclusively generates marketing-related content to improve your marketing team's productivity. Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. For more information, see Amazon Q Apps. Analytics dashboard The Amazon Q Business analytics dashboard provides comprehensive insights into how users interact with your application. It offers metrics and visualizations on conversation volumes, popular topics, user engagement patterns, and system performance. Administrators can use these analytics to identify trends, understand user needs, measure adoption rates, and make data-driven decisions to improve the application. The dashboard helps track the effectiveness of your Amazon Q Business implementation, identify areas for enhancement, and demonstrate the value it brings to your organization. For more information, see Using the analytics dashboard. Audio and video extraction Amazon Q Business extracts semantic information from audio and video files, making multimedia content queryable. This allows users to query audio and video content using natural language and explore deeper with follow-up questions, enhancing information retrieval from multimedia sources. For more information, see Extracting semantic meaning from audio and video content. Browser extensions The Amazon Q Business browser extension enhances users' web browsing experience by bringing AI-powered assistance directly into their daily workflows. 
Available for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, the extension allows users to summarize web pages, ask questions about content, access company knowledge, and use other features available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests"} +{"global_id": 353, "doc_id": "qbusiness", "chunk_id": "6", "question_id": 2, "question": "What can Amazon Q Business extract from audio and video files?", "answer_span": "Amazon Q Business extracts semantic information from audio and video files, making multimedia content queryable.", "chunk": "experience that exclusively generates marketing-related content to improve your marketing team's productivity. Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. For more information, see Amazon Q Apps. Analytics dashboard The Amazon Q Business analytics dashboard provides comprehensive insights into how users interact with your application. It offers metrics and visualizations on conversation volumes, popular topics, user engagement patterns, and system performance. Administrators can use these analytics to identify trends, understand user needs, measure adoption rates, and make data-driven decisions to improve the application. The dashboard helps track the effectiveness of your Amazon Q Business implementation, identify areas for enhancement, and demonstrate the value it brings to your organization. For more information, see Using the analytics dashboard. Audio and video extraction Amazon Q Business extracts semantic information from audio and video files, making multimedia content queryable. This allows users to query audio and video content using natural language and explore deeper with follow-up questions, enhancing information retrieval from multimedia sources. For more information, see Extracting semantic meaning from audio and video content. Browser extensions The Amazon Q Business browser extension enhances users' web browsing experience by bringing AI-powered assistance directly into their daily workflows. Available for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, the extension allows users to summarize web pages, ask questions about content, access company knowledge, and use other features available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests"} +{"global_id": 354, "doc_id": "qbusiness", "chunk_id": "6", "question_id": 3, "question": "What browsers is the Amazon Q Business browser extension available for?", "answer_span": "Available for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers.", "chunk": "experience that exclusively generates marketing-related content to improve your marketing team's productivity. 
Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. For more information, see Amazon Q Apps. Analytics dashboard The Amazon Q Business analytics dashboard provides comprehensive insights into how users interact with your application. It offers metrics and visualizations on conversation volumes, popular topics, user engagement patterns, and system performance. Administrators can use these analytics to identify trends, understand user needs, measure adoption rates, and make data-driven decisions to improve the application. The dashboard helps track the effectiveness of your Amazon Q Business implementation, identify areas for enhancement, and demonstrate the value it brings to your organization. For more information, see Using the analytics dashboard. Audio and video extraction Amazon Q Business extracts semantic information from audio and video files, making multimedia content queryable. This allows users to query audio and video content using natural language and explore deeper with follow-up questions, enhancing information retrieval from multimedia sources. For more information, see Extracting semantic meaning from audio and video content. Browser extensions The Amazon Q Business browser extension enhances users' web browsing experience by bringing AI-powered assistance directly into their daily workflows. Available for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, the extension allows users to summarize web pages, ask questions about content, access company knowledge, and use other features available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests"} +{"global_id": 355, "doc_id": "qbusiness", "chunk_id": "6", "question_id": 4, "question": "What does chat orchestration do in Amazon Q Business?", "answer_span": "Chat orchestration is a Amazon Q Business feature that automatically manages chat requests.", "chunk": "experience that exclusively generates marketing-related content to improve your marketing team's productivity. Your marketing team members can, in turn, also create their own Amazon Q Apps with its own marketing content-generation capabilities—like writing customer emails and creating promotional content using a certain style of voice, tone, and branding. For more information, see Amazon Q Apps. Analytics dashboard The Amazon Q Business analytics dashboard provides comprehensive insights into how users interact with your application. It offers metrics and visualizations on conversation volumes, popular topics, user engagement patterns, and system performance. Administrators can use these analytics to identify trends, understand user needs, measure adoption rates, and make data-driven decisions to improve the application. The dashboard helps track the effectiveness of your Amazon Q Business implementation, identify areas for enhancement, and demonstrate the value it brings to your organization. For more information, see Using the analytics dashboard. Audio and video extraction Amazon Q Business extracts semantic information from audio and video files, making multimedia content queryable. 
This allows users to query audio and video content using natural language and explore deeper with follow-up questions, enhancing information retrieval from multimedia sources. For more information, see Extracting semantic meaning from audio and video content. Browser extensions The Amazon Q Business browser extension enhances users' web browsing experience by bringing AI-powered assistance directly into their daily workflows. Available for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, the extension allows users to summarize web pages, ask questions about content, access company knowledge, and use other features available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests"} +{"global_id": 356, "doc_id": "qbusiness", "chunk_id": "7", "question_id": 1, "question": "What is required for the integration of Amazon Q Business?", "answer_span": "This integration is only available for Amazon Q Business Pro users and requires installation and authentication.", "chunk": "available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests across configured plugins and data sources. When enabled, Amazon Q Business automatically routes chat requests to plugins, integrating enterprise data and relevant actions within a single Analytics dashboard 14 Amazon Q Business User Guide chat response. This feature provides unified response integration combining RAG workflow with plugin actions, intelligent action detection for read-only vs. write actions, and smart plugin management with user-driven experience through clarification requests when needed. For more information, see Chat orchestration settings. Custom document enrichment Document enrichment is an Amazon Q Business feature that you can use to manipulate your document content and document attributes. You can use document enrichment to perform optical character recognition (OCR) or translation. Document enrichment uses basic and Lambda operations. For more information see, Document attributes and types and Document enrichment. Data accessors The Amazon Q Business data accessors feature allows you to securely share your enterprise data with verified independent software vendors (ISVs) using Amazon Q. This feature enables ISVs to retrieve relevant content from your Amazon Q index, enhancing their applications with your organization's knowledge. By granting controlled access to your data, you can leverage thirdparty tools while maintaining security and data access compliance. Data accessors include verified software providers such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. 
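The records above describe how Amazon Q Business answers chat requests and, with chat orchestration enabled, routes them across configured plugins and data sources. A minimal sketch of issuing one chat request through the boto3 `qbusiness` client follows; the application ID and the response field names are placeholders/assumptions for illustration, not values taken from the dataset.

```python
import boto3

# Sketch only: assumes a deployed Amazon Q Business application whose caller
# identity is resolved by the application (for example via IAM Identity Center).
qbusiness = boto3.client("qbusiness")

response = qbusiness.chat_sync(
    applicationId="00000000-0000-0000-0000-000000000000",  # placeholder
    userMessage="Summarize our current PTO policy.",
)

# Field names below are assumed from the ChatSync response shape.
print(response.get("systemMessage"))
for attribution in response.get("sourceAttributions", []):
    print("source:", attribution.get("title"))
```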
Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals."} +{"global_id": 357, "doc_id": "qbusiness", "chunk_id": "7", "question_id": 2, "question": "What does chat orchestration in Amazon Q Business do?", "answer_span": "Chat orchestration is a Amazon Q Business feature that automatically manages chat requests across configured plugins and data sources.", "chunk": "available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests across configured plugins and data sources. When enabled, Amazon Q Business automatically routes chat requests to plugins, integrating enterprise data and relevant actions within a single Analytics dashboard 14 Amazon Q Business User Guide chat response. This feature provides unified response integration combining RAG workflow with plugin actions, intelligent action detection for read-only vs. write actions, and smart plugin management with user-driven experience through clarification requests when needed. For more information, see Chat orchestration settings. Custom document enrichment Document enrichment is an Amazon Q Business feature that you can use to manipulate your document content and document attributes. You can use document enrichment to perform optical character recognition (OCR) or translation. Document enrichment uses basic and Lambda operations. For more information see, Document attributes and types and Document enrichment. Data accessors The Amazon Q Business data accessors feature allows you to securely share your enterprise data with verified independent software vendors (ISVs) using Amazon Q. This feature enables ISVs to retrieve relevant content from your Amazon Q index, enhancing their applications with your organization's knowledge. By granting controlled access to your data, you can leverage thirdparty tools while maintaining security and data access compliance. Data accessors include verified software providers such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals."} +{"global_id": 358, "doc_id": "qbusiness", "chunk_id": "7", "question_id": 3, "question": "What can you use document enrichment for in Amazon Q Business?", "answer_span": "You can use document enrichment to perform optical character recognition (OCR) or translation.", "chunk": "available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests across configured plugins and data sources. When enabled, Amazon Q Business automatically routes chat requests to plugins, integrating enterprise data and relevant actions within a single Analytics dashboard 14 Amazon Q Business User Guide chat response. 
This feature provides unified response integration combining RAG workflow with plugin actions, intelligent action detection for read-only vs. write actions, and smart plugin management with user-driven experience through clarification requests when needed. For more information, see Chat orchestration settings. Custom document enrichment Document enrichment is an Amazon Q Business feature that you can use to manipulate your document content and document attributes. You can use document enrichment to perform optical character recognition (OCR) or translation. Document enrichment uses basic and Lambda operations. For more information see, Document attributes and types and Document enrichment. Data accessors The Amazon Q Business data accessors feature allows you to securely share your enterprise data with verified independent software vendors (ISVs) using Amazon Q. This feature enables ISVs to retrieve relevant content from your Amazon Q index, enhancing their applications with your organization's knowledge. By granting controlled access to your data, you can leverage thirdparty tools while maintaining security and data access compliance. Data accessors include verified software providers such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals."} +{"global_id": 359, "doc_id": "qbusiness", "chunk_id": "7", "question_id": 4, "question": "What do data accessors in Amazon Q Business allow you to do?", "answer_span": "The Amazon Q Business data accessors feature allows you to securely share your enterprise data with verified independent software vendors (ISVs) using Amazon Q.", "chunk": "available in the Amazon Q Business web experience. This integration is only available for Amazon Q Business Pro users and requires installation and authentication. For more information, see Enhancing web browsing with Amazon Q Business. Chat orchestration Chat orchestration is a Amazon Q Business feature that automatically manages chat requests across configured plugins and data sources. When enabled, Amazon Q Business automatically routes chat requests to plugins, integrating enterprise data and relevant actions within a single Analytics dashboard 14 Amazon Q Business User Guide chat response. This feature provides unified response integration combining RAG workflow with plugin actions, intelligent action detection for read-only vs. write actions, and smart plugin management with user-driven experience through clarification requests when needed. For more information, see Chat orchestration settings. Custom document enrichment Document enrichment is an Amazon Q Business feature that you can use to manipulate your document content and document attributes. You can use document enrichment to perform optical character recognition (OCR) or translation. Document enrichment uses basic and Lambda operations. For more information see, Document attributes and types and Document enrichment. Data accessors The Amazon Q Business data accessors feature allows you to securely share your enterprise data with verified independent software vendors (ISVs) using Amazon Q. This feature enables ISVs to retrieve relevant content from your Amazon Q index, enhancing their applications with your organization's knowledge. 
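The custom document enrichment record above mentions basic and Lambda operations for transforming content (for example OCR or translation) during ingestion. The Lambda handler below is only a shape sketch: the event and return field names (`s3Bucket`, `s3ObjectKey`, `metadata`) are hypothetical stand-ins rather than the documented contract, and `translate_text` is a placeholder for whatever transformation you plug in.

```python
import boto3

s3 = boto3.client("s3")

def translate_text(text: str) -> str:
    # Placeholder transformation; swap in OCR, translation, redaction, etc.
    return text

def lambda_handler(event, context):
    # Hypothetical event shape: a pointer to the extracted document in S3
    # plus its document attributes.
    bucket = event["s3Bucket"]
    key = event["s3ObjectKey"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    enriched = translate_text(body)

    out_key = key + ".enriched"
    s3.put_object(Bucket=bucket, Key=out_key, Body=enriched.encode("utf-8"))

    # Hypothetical return shape: point the service at the transformed object
    # and pass document attributes through unchanged.
    return {
        "s3Bucket": bucket,
        "s3ObjectKey": out_key,
        "metadata": event.get("metadata", {}),
    }
```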
By granting controlled access to your data, you can leverage thirdparty tools while maintaining security and data access compliance. Data accessors include verified software providers such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals."} +{"global_id": 360, "doc_id": "qbusiness", "chunk_id": "8", "question_id": 1, "question": "What is a data source in Amazon Q Business?", "answer_span": "A data source is a document repository.", "chunk": "such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals. Amazon Q Business supports multiple connectors so that you can build your generative AI solution with minimal configuring. For a list of Amazon Q Business supported connectors, see Supported connectors. For an overview of Amazon Q Business connector features, see Amazon Q Business data source connector features. Document In Amazon Q Business, a document is a unit of data. Specific document formats supported include .csv, .docx, HTML, JSON, .pdf, plaintext, .ppt, .pptx, .rtf, and .xslx. For more information, see Supported document types. Custom document enrichment 15 Amazon Q Business User Guide Document attributes Document attributes are structural metadata associated with documents, such as document title, document type, and date and time created. Amazon Q Business extracts document attributes during the document ingestion process to provide customizable chat and data manipulation capabilities for your application environment. Amazon Q Business offers reserved document attributes that you can use. Or, you can create custom attributes. For more information, see Document attributes, Filtering using document attributes, Boosting using document attributes, and Custom document enrichment. Field mappings An Amazon Q Business index has fields that help you structure data to aid the retrieval process. You can map index fields to your document attributes when you add documents directly to an index, or use a data source connector. Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute"} +{"global_id": 361, "doc_id": "qbusiness", "chunk_id": "8", "question_id": 2, "question": "What can a data source connector do?", "answer_span": "A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals.", "chunk": "such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals. Amazon Q Business supports multiple connectors so that you can build your generative AI solution with minimal configuring. For a list of Amazon Q Business supported connectors, see Supported connectors. 
For an overview of Amazon Q Business connector features, see Amazon Q Business data source connector features. Document In Amazon Q Business, a document is a unit of data. Specific document formats supported include .csv, .docx, HTML, JSON, .pdf, plaintext, .ppt, .pptx, .rtf, and .xslx. For more information, see Supported document types. Custom document enrichment 15 Amazon Q Business User Guide Document attributes Document attributes are structural metadata associated with documents, such as document title, document type, and date and time created. Amazon Q Business extracts document attributes during the document ingestion process to provide customizable chat and data manipulation capabilities for your application environment. Amazon Q Business offers reserved document attributes that you can use. Or, you can create custom attributes. For more information, see Document attributes, Filtering using document attributes, Boosting using document attributes, and Custom document enrichment. Field mappings An Amazon Q Business index has fields that help you structure data to aid the retrieval process. You can map index fields to your document attributes when you add documents directly to an index, or use a data source connector. Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute"} +{"global_id": 362, "doc_id": "qbusiness", "chunk_id": "8", "question_id": 3, "question": "What are document attributes in Amazon Q Business?", "answer_span": "Document attributes are structural metadata associated with documents, such as document title, document type, and date and time created.", "chunk": "such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals. Amazon Q Business supports multiple connectors so that you can build your generative AI solution with minimal configuring. For a list of Amazon Q Business supported connectors, see Supported connectors. For an overview of Amazon Q Business connector features, see Amazon Q Business data source connector features. Document In Amazon Q Business, a document is a unit of data. Specific document formats supported include .csv, .docx, HTML, JSON, .pdf, plaintext, .ppt, .pptx, .rtf, and .xslx. For more information, see Supported document types. Custom document enrichment 15 Amazon Q Business User Guide Document attributes Document attributes are structural metadata associated with documents, such as document title, document type, and date and time created. Amazon Q Business extracts document attributes during the document ingestion process to provide customizable chat and data manipulation capabilities for your application environment. Amazon Q Business offers reserved document attributes that you can use. Or, you can create custom attributes. For more information, see Document attributes, Filtering using document attributes, Boosting using document attributes, and Custom document enrichment. Field mappings An Amazon Q Business index has fields that help you structure data to aid the retrieval process. You can map index fields to your document attributes when you add documents directly to an index, or use a data source connector. 
Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute"} +{"global_id": 363, "doc_id": "qbusiness", "chunk_id": "8", "question_id": 4, "question": "What feature does Amazon Q Business offer for filtering chat responses?", "answer_span": "Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user.", "chunk": "such as Asana, Miro, Zoom, and PagerDuty. For more information, see Share your enterprise data with data accessors. Data source A data source is a document repository. Data source connector A data source connector can crawl and synchronize a data source with an Amazon Q Business index at customizable intervals. Amazon Q Business supports multiple connectors so that you can build your generative AI solution with minimal configuring. For a list of Amazon Q Business supported connectors, see Supported connectors. For an overview of Amazon Q Business connector features, see Amazon Q Business data source connector features. Document In Amazon Q Business, a document is a unit of data. Specific document formats supported include .csv, .docx, HTML, JSON, .pdf, plaintext, .ppt, .pptx, .rtf, and .xslx. For more information, see Supported document types. Custom document enrichment 15 Amazon Q Business User Guide Document attributes Document attributes are structural metadata associated with documents, such as document title, document type, and date and time created. Amazon Q Business extracts document attributes during the document ingestion process to provide customizable chat and data manipulation capabilities for your application environment. Amazon Q Business offers reserved document attributes that you can use. Or, you can create custom attributes. For more information, see Document attributes, Filtering using document attributes, Boosting using document attributes, and Custom document enrichment. Field mappings An Amazon Q Business index has fields that help you structure data to aid the retrieval process. You can map index fields to your document attributes when you add documents directly to an index, or use a data source connector. Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute"} +{"global_id": 364, "doc_id": "qbusiness", "chunk_id": "9", "question_id": 1, "question": "What is a foundation model?", "answer_span": "A foundation model (FM) is a broad, function-based machine learning model (not specific to language systems).", "chunk": "add documents directly to an index, or use a data source connector. Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute associated with a data source type, you can use the attribute to mandate that chat responses only be generated from a specific data source. For more information, see Filtering using document attributes. Foundation model A foundation model (FM) is a broad, function-based machine learning model (not specific to language systems). 
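Filtering using document attributes, described in the preceding records, constrains which documents can ground a chat response, for example restricting answers to a single data source. The sketch below assumes the boto3 `chat_sync` operation accepts an `attributeFilter` argument shaped as shown; the reserved attribute name `_data_source_id` and all IDs are assumptions/placeholders.

```python
import boto3

qbusiness = boto3.client("qbusiness")

# Assumed filter shape: only documents whose _data_source_id attribute matches
# a specific connector may be used to generate the response.
attribute_filter = {
    "equalsTo": {
        "name": "_data_source_id",  # assumed reserved attribute name
        "value": {"stringValue": "my-confluence-data-source-id"},  # placeholder
    }
}

response = qbusiness.chat_sync(
    applicationId="00000000-0000-0000-0000-000000000000",  # placeholder
    userMessage="What changed in the latest release notes?",
    attributeFilter=attribute_filter,
)
print(response.get("systemMessage"))
```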
An FM is tuned to a large number (billions) of parameters and is trained on a large corpus of documents. Guardrails An Amazon Q Business feature that lets you define global controls and topic-level controls for your application environment. Using this feature, you can control what sources your application environment will use to generate responses from, and also control what topics it will respond to and how. For more information, see Guardrails. Document attributes 16 Amazon Q Business User Guide Hallucination A hallucination, in the machine learning context, is a confident response by an AI application environment that isn't justified by its training data. Think of a hallucination as instances where the response doesn't make sense in the context of the prompt, or when the responses are out of scope with the documents provided. Amazon Q Business offers you the ability to minimize hallucinations by allowing your retrieval system to generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates"} +{"global_id": 365, "doc_id": "qbusiness", "chunk_id": "9", "question_id": 2, "question": "What feature allows you to filter chat responses based on document attributes?", "answer_span": "Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user.", "chunk": "add documents directly to an index, or use a data source connector. Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute associated with a data source type, you can use the attribute to mandate that chat responses only be generated from a specific data source. For more information, see Filtering using document attributes. Foundation model A foundation model (FM) is a broad, function-based machine learning model (not specific to language systems). An FM is tuned to a large number (billions) of parameters and is trained on a large corpus of documents. Guardrails An Amazon Q Business feature that lets you define global controls and topic-level controls for your application environment. Using this feature, you can control what sources your application environment will use to generate responses from, and also control what topics it will respond to and how. For more information, see Guardrails. Document attributes 16 Amazon Q Business User Guide Hallucination A hallucination, in the machine learning context, is a confident response by an AI application environment that isn't justified by its training data. Think of a hallucination as instances where the response doesn't make sense in the context of the prompt, or when the responses are out of scope with the documents provided. Amazon Q Business offers you the ability to minimize hallucinations by allowing your retrieval system to generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. 
If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates"} +{"global_id": 366, "doc_id": "qbusiness", "chunk_id": "9", "question_id": 3, "question": "What does hallucination mean in the machine learning context?", "answer_span": "A hallucination, in the machine learning context, is a confident response by an AI application environment that isn't justified by its training data.", "chunk": "add documents directly to an index, or use a data source connector. Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute associated with a data source type, you can use the attribute to mandate that chat responses only be generated from a specific data source. For more information, see Filtering using document attributes. Foundation model A foundation model (FM) is a broad, function-based machine learning model (not specific to language systems). An FM is tuned to a large number (billions) of parameters and is trained on a large corpus of documents. Guardrails An Amazon Q Business feature that lets you define global controls and topic-level controls for your application environment. Using this feature, you can control what sources your application environment will use to generate responses from, and also control what topics it will respond to and how. For more information, see Guardrails. Document attributes 16 Amazon Q Business User Guide Hallucination A hallucination, in the machine learning context, is a confident response by an AI application environment that isn't justified by its training data. Think of a hallucination as instances where the response doesn't make sense in the context of the prompt, or when the responses are out of scope with the documents provided. Amazon Q Business offers you the ability to minimize hallucinations by allowing your retrieval system to generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates"} +{"global_id": 367, "doc_id": "qbusiness", "chunk_id": "9", "question_id": 4, "question": "What is hallucination mitigation?", "answer_span": "Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat.", "chunk": "add documents directly to an index, or use a data source connector. Filtering using document attributes Filtering using document attributes is an Amazon Q Business feature that you can use to filter your Amazon Q Business chat responses for your end user. For example, if you have a document attribute associated with a data source type, you can use the attribute to mandate that chat responses only be generated from a specific data source. For more information, see Filtering using document attributes. Foundation model A foundation model (FM) is a broad, function-based machine learning model (not specific to language systems). An FM is tuned to a large number (billions) of parameters and is trained on a large corpus of documents. 
Guardrails An Amazon Q Business feature that lets you define global controls and topic-level controls for your application environment. Using this feature, you can control what sources your application environment will use to generate responses from, and also control what topics it will respond to and how. For more information, see Guardrails. Document attributes 16 Amazon Q Business User Guide Hallucination A hallucination, in the machine learning context, is a confident response by an AI application environment that isn't justified by its training data. Think of a hallucination as instances where the response doesn't make sense in the context of the prompt, or when the responses are out of scope with the documents provided. Amazon Q Business offers you the ability to minimize hallucinations by allowing your retrieval system to generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates"} +{"global_id": 368, "doc_id": "qbusiness", "chunk_id": "10", "question_id": 1, "question": "What is hallucination mitigation in Amazon Q Business?", "answer_span": "Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat.", "chunk": "generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates a new, edited message. This feature is only available for retrieval augmented generation (RAG) responses from data connected to the application and is not supported for chat orchestration, plugin workflows, or responses generated from tabular data or multimedia transcripts. For more information, see Response settings. IAM Identity Center You can manage user access to your Amazon Q Business application environment using IAM Identity Center as your AWS gateway to the identity provider of your choice. For more information on creating an Amazon Q Business application environment integrated with IAM Identity Center see Configuring an IAM Identity Center instance and Configuring an Amazon Q Business application. For more information about using IAM Identity Center to manage access to applications, see Manage access to applications in the IAM Identity Center User Guide. Identity Federation through IAM Amazon Q Business supports identity federation through AWS Identity and Access Management. When you use identity federation, you can manage users with your enterprise identity provider (IdP) and use AWS Identity and Access Management to authenticate users when they sign in to AWS Identity and Access Management. For more information on creating an Amazon Q Business application environment integrated with AWS Identity and Access Management see Configuring an Amazon Q Business application. Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). 
Some examples of IdPs are IAM Identity Center, Okta, and"} +{"global_id": 369, "doc_id": "qbusiness", "chunk_id": "10", "question_id": 2, "question": "What happens if a hallucination is detected with high confidence?", "answer_span": "If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates a new, edited message.", "chunk": "generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates a new, edited message. This feature is only available for retrieval augmented generation (RAG) responses from data connected to the application and is not supported for chat orchestration, plugin workflows, or responses generated from tabular data or multimedia transcripts. For more information, see Response settings. IAM Identity Center You can manage user access to your Amazon Q Business application environment using IAM Identity Center as your AWS gateway to the identity provider of your choice. For more information on creating an Amazon Q Business application environment integrated with IAM Identity Center see Configuring an IAM Identity Center instance and Configuring an Amazon Q Business application. For more information about using IAM Identity Center to manage access to applications, see Manage access to applications in the IAM Identity Center User Guide. Identity Federation through IAM Amazon Q Business supports identity federation through AWS Identity and Access Management. When you use identity federation, you can manage users with your enterprise identity provider (IdP) and use AWS Identity and Access Management to authenticate users when they sign in to AWS Identity and Access Management. For more information on creating an Amazon Q Business application environment integrated with AWS Identity and Access Management see Configuring an Amazon Q Business application. Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). Some examples of IdPs are IAM Identity Center, Okta, and"} +{"global_id": 370, "doc_id": "qbusiness", "chunk_id": "10", "question_id": 3, "question": "What is the purpose of IAM Identity Center in Amazon Q Business?", "answer_span": "You can manage user access to your Amazon Q Business application environment using IAM Identity Center as your AWS gateway to the identity provider of your choice.", "chunk": "generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates a new, edited message. This feature is only available for retrieval augmented generation (RAG) responses from data connected to the application and is not supported for chat orchestration, plugin workflows, or responses generated from tabular data or multimedia transcripts. For more information, see Response settings. 
IAM Identity Center You can manage user access to your Amazon Q Business application environment using IAM Identity Center as your AWS gateway to the identity provider of your choice. For more information on creating an Amazon Q Business application environment integrated with IAM Identity Center see Configuring an IAM Identity Center instance and Configuring an Amazon Q Business application. For more information about using IAM Identity Center to manage access to applications, see Manage access to applications in the IAM Identity Center User Guide. Identity Federation through IAM Amazon Q Business supports identity federation through AWS Identity and Access Management. When you use identity federation, you can manage users with your enterprise identity provider (IdP) and use AWS Identity and Access Management to authenticate users when they sign in to AWS Identity and Access Management. For more information on creating an Amazon Q Business application environment integrated with AWS Identity and Access Management see Configuring an Amazon Q Business application. Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). Some examples of IdPs are IAM Identity Center, Okta, and"} +{"global_id": 371, "doc_id": "qbusiness", "chunk_id": "10", "question_id": 4, "question": "What does an identity provider (IdP) do?", "answer_span": "An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business).", "chunk": "generate responses only from your existing enterprise data. Hallucination mitigation Hallucination mitigation is a Amazon Q Business feature that checks chat responses for hallucinations and corrects inconsistencies in real-time during chat. If a hallucination is detected with high confidence, Amazon Q Business corrects the inconsistencies in its response and generates a new, edited message. This feature is only available for retrieval augmented generation (RAG) responses from data connected to the application and is not supported for chat orchestration, plugin workflows, or responses generated from tabular data or multimedia transcripts. For more information, see Response settings. IAM Identity Center You can manage user access to your Amazon Q Business application environment using IAM Identity Center as your AWS gateway to the identity provider of your choice. For more information on creating an Amazon Q Business application environment integrated with IAM Identity Center see Configuring an IAM Identity Center instance and Configuring an Amazon Q Business application. For more information about using IAM Identity Center to manage access to applications, see Manage access to applications in the IAM Identity Center User Guide. Identity Federation through IAM Amazon Q Business supports identity federation through AWS Identity and Access Management. When you use identity federation, you can manage users with your enterprise identity provider (IdP) and use AWS Identity and Access Management to authenticate users when they sign in to AWS Identity and Access Management. For more information on creating an Amazon Q Business application environment integrated with AWS Identity and Access Management see Configuring an Amazon Q Business application. 
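The identity records above distinguish IAM Identity Center and IAM identity federation as the two ways to manage user access to an application environment. As a sketch, and under the assumption that the boto3 `create_application` call accepts an `identityCenterInstanceArn` parameter, wiring an application to an Identity Center instance could look like this; the ARNs and names are placeholders.

```python
import boto3

qbusiness = boto3.client("qbusiness")

# Sketch only: ARNs are placeholders and parameter names are assumed.
app = qbusiness.create_application(
    displayName="internal-knowledge-assistant",
    roleArn="arn:aws:iam::111122223333:role/QBusinessAppRole",
    identityCenterInstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE1111111111",
)
print(app["applicationId"])
```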
Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). Some examples of IdPs are IAM Identity Center, Okta, and"} +{"global_id": 372, "doc_id": "qbusiness", "chunk_id": "11", "question_id": 1, "question": "What is an identity provider (IdP)?", "answer_span": "An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business).", "chunk": "Configuring an Amazon Q Business application. Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). Some examples of IdPs are IAM Identity Center, Okta, and Microsoft EntraID (formerly Azure Active Directory). Index An index is a corpus of documents. Amazon Q Business supports its own index where you can add and sync documents. An index has fields that you can map your document attributes to, to enhance your end user's chat experience. Amazon Q Business creates retriever for you when it creates your Amazon Q Business index. Amazon Q Business provides two types of index: Enterprise and Starter. You can also use an Amazon Kendra index as a retriever for your generative AI application environment. Index capacity When you use an Amazon Q Business native index for your application environment, you must provision data storage capacity for it. Amazon Q Business provides two types of index: Enterprise and Starter. Both index types include 20,000 documents or 200 MB of total extracted text (whichever is reached first) and 100 hours of data connector usage (time that it takes to scan and index new, updated, or deleted documents) by default. For more information, see Amazon Q Business Index types and Pricing for subscriptions and indices. Integrations Amazon Q Business integrations enhance user productivity by bringing AI-powered assistance directly into daily workflows through third-party enterprise tools. These integrations include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge"} +{"global_id": 373, "doc_id": "qbusiness", "chunk_id": "11", "question_id": 2, "question": "What types of index does Amazon Q Business provide?", "answer_span": "Amazon Q Business provides two types of index: Enterprise and Starter.", "chunk": "Configuring an Amazon Q Business application. Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). Some examples of IdPs are IAM Identity Center, Okta, and Microsoft EntraID (formerly Azure Active Directory). Index An index is a corpus of documents. Amazon Q Business supports its own index where you can add and sync documents. An index has fields that you can map your document attributes to, to enhance your end user's chat experience. Amazon Q Business creates retriever for you when it creates your Amazon Q Business index. 
Amazon Q Business provides two types of index: Enterprise and Starter. You can also use an Amazon Kendra index as a retriever for your generative AI application environment. Index capacity When you use an Amazon Q Business native index for your application environment, you must provision data storage capacity for it. Amazon Q Business provides two types of index: Enterprise and Starter. Both index types include 20,000 documents or 200 MB of total extracted text (whichever is reached first) and 100 hours of data connector usage (time that it takes to scan and index new, updated, or deleted documents) by default. For more information, see Amazon Q Business Index types and Pricing for subscriptions and indices. Integrations Amazon Q Business integrations enhance user productivity by bringing AI-powered assistance directly into daily workflows through third-party enterprise tools. These integrations include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge"} +{"global_id": 374, "doc_id": "qbusiness", "chunk_id": "11", "question_id": 3, "question": "What is the default capacity for both index types in Amazon Q Business?", "answer_span": "Both index types include 20,000 documents or 200 MB of total extracted text (whichever is reached first) and 100 hours of data connector usage.", "chunk": "Configuring an Amazon Q Business application. Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). Some examples of IdPs are IAM Identity Center, Okta, and Microsoft EntraID (formerly Azure Active Directory). Index An index is a corpus of documents. Amazon Q Business supports its own index where you can add and sync documents. An index has fields that you can map your document attributes to, to enhance your end user's chat experience. Amazon Q Business creates retriever for you when it creates your Amazon Q Business index. Amazon Q Business provides two types of index: Enterprise and Starter. You can also use an Amazon Kendra index as a retriever for your generative AI application environment. Index capacity When you use an Amazon Q Business native index for your application environment, you must provision data storage capacity for it. Amazon Q Business provides two types of index: Enterprise and Starter. Both index types include 20,000 documents or 200 MB of total extracted text (whichever is reached first) and 100 hours of data connector usage (time that it takes to scan and index new, updated, or deleted documents) by default. For more information, see Amazon Q Business Index types and Pricing for subscriptions and indices. Integrations Amazon Q Business integrations enhance user productivity by bringing AI-powered assistance directly into daily workflows through third-party enterprise tools. These integrations include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. 
Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge"} +{"global_id": 375, "doc_id": "qbusiness", "chunk_id": "11", "question_id": 4, "question": "What integrations does Amazon Q Business offer?", "answer_span": "These integrations include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word.", "chunk": "Configuring an Amazon Q Business application. Hallucination 17 Amazon Q Business User Guide Identity provider An identity provider (IdP) is a service that stores, manages, maintains, and verifies user identities for your application environment (in this case, Amazon Q Business). Some examples of IdPs are IAM Identity Center, Okta, and Microsoft EntraID (formerly Azure Active Directory). Index An index is a corpus of documents. Amazon Q Business supports its own index where you can add and sync documents. An index has fields that you can map your document attributes to, to enhance your end user's chat experience. Amazon Q Business creates retriever for you when it creates your Amazon Q Business index. Amazon Q Business provides two types of index: Enterprise and Starter. You can also use an Amazon Kendra index as a retriever for your generative AI application environment. Index capacity When you use an Amazon Q Business native index for your application environment, you must provision data storage capacity for it. Amazon Q Business provides two types of index: Enterprise and Starter. Both index types include 20,000 documents or 200 MB of total extracted text (whichever is reached first) and 100 hours of data connector usage (time that it takes to scan and index new, updated, or deleted documents) by default. For more information, see Amazon Q Business Index types and Pricing for subscriptions and indices. Integrations Amazon Q Business integrations enhance user productivity by bringing AI-powered assistance directly into daily workflows through third-party enterprise tools. These integrations include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge"} +{"global_id": 376, "doc_id": "qbusiness", "chunk_id": "12", "question_id": 1, "question": "What browsers are included for Amazon Q Business integrations?", "answer_span": "include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers", "chunk": "include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge without context switching during their work. For more information, see Integrations. Identity provider 18 Amazon Q Business User Guide ISV integration Amazon Q Business enables independent software vendors (ISVs) to leverage customer enterprise data through the Amazon Q index to enhance their applications with generative AI capabilities. 
ISVs can access this data through two methods: either by being added as a data accessor to an existing customer's Amazon Q index, or by creating a Amazon Q application on behalf of the customer. The SearchRelevantContent API operation allows ISVs to retrieve relevant content from the customer's data sources while maintaining security and access controls, ensuring users only see content they have permission to access. This integration enables software providers to build enhanced application experiences without having to directly connect to or index individual data sources. For more information, see Amazon Q index for independent software vendors (ISVs). Large language model A large language model (LLM) is a language-based, machine learning model that's tuned to a large number (billions) of parameters and trained on a large corpus of documents. Principal Mapping Principal mapping is used to connect users and groups with their user ids and group membership information in data sources connected to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat."} +{"global_id": 377, "doc_id": "qbusiness", "chunk_id": "12", "question_id": 2, "question": "What applications are included for Amazon Q Business integrations?", "answer_span": "as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word", "chunk": "include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge without context switching during their work. For more information, see Integrations. Identity provider 18 Amazon Q Business User Guide ISV integration Amazon Q Business enables independent software vendors (ISVs) to leverage customer enterprise data through the Amazon Q index to enhance their applications with generative AI capabilities. ISVs can access this data through two methods: either by being added as a data accessor to an existing customer's Amazon Q index, or by creating a Amazon Q application on behalf of the customer. The SearchRelevantContent API operation allows ISVs to retrieve relevant content from the customer's data sources while maintaining security and access controls, ensuring users only see content they have permission to access. This integration enables software providers to build enhanced application experiences without having to directly connect to or index individual data sources. For more information, see Amazon Q index for independent software vendors (ISVs). Large language model A large language model (LLM) is a language-based, machine learning model that's tuned to a large number (billions) of parameters and trained on a large corpus of documents. Principal Mapping Principal mapping is used to connect users and groups with their user ids and group membership information in data sources connected to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. 
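The ISV integration record above names the SearchRelevantContent API operation as the way a data accessor retrieves permission-filtered content from a customer's Amazon Q index. A minimal sketch follows; the `contentSource` shape, the IDs, and the response fields iterated over are assumptions for illustration.

```python
import boto3

qbusiness = boto3.client("qbusiness")

# Sketch only: IDs are placeholders and the request/response shapes are assumed.
results = qbusiness.search_relevant_content(
    applicationId="00000000-0000-0000-0000-000000000000",
    queryText="onboarding checklist for new engineers",
    contentSource={
        "retriever": {"retrieverId": "11111111-1111-1111-1111-111111111111"}
    },
)

for item in results.get("relevantContent", []):
    print(item.get("documentTitle"), item.get("documentUri"))
```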
With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat."} +{"global_id": 378, "doc_id": "qbusiness", "chunk_id": "12", "question_id": 3, "question": "What does Amazon Q Business enable independent software vendors (ISVs) to leverage?", "answer_span": "Amazon Q Business enables independent software vendors (ISVs) to leverage customer enterprise data through the Amazon Q index", "chunk": "include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge without context switching during their work. For more information, see Integrations. Identity provider 18 Amazon Q Business User Guide ISV integration Amazon Q Business enables independent software vendors (ISVs) to leverage customer enterprise data through the Amazon Q index to enhance their applications with generative AI capabilities. ISVs can access this data through two methods: either by being added as a data accessor to an existing customer's Amazon Q index, or by creating a Amazon Q application on behalf of the customer. The SearchRelevantContent API operation allows ISVs to retrieve relevant content from the customer's data sources while maintaining security and access controls, ensuring users only see content they have permission to access. This integration enables software providers to build enhanced application experiences without having to directly connect to or index individual data sources. For more information, see Amazon Q index for independent software vendors (ISVs). Large language model A large language model (LLM) is a language-based, machine learning model that's tuned to a large number (billions) of parameters and trained on a large corpus of documents. Principal Mapping Principal mapping is used to connect users and groups with their user ids and group membership information in data sources connected to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat."} +{"global_id": 379, "doc_id": "qbusiness", "chunk_id": "12", "question_id": 4, "question": "What is a large language model (LLM)?", "answer_span": "A large language model (LLM) is a language-based, machine learning model that's tuned to a large number (billions) of parameters and trained on a large corpus of documents", "chunk": "include browser extensions for Google Chrome, Microsoft Edge, and Mozilla Firefox browsers, as well as applications for Slack, Microsoft Teams, Microsoft Outlook, and Microsoft Word. Each integration must be configured and deployed to bring Amazon Q Business capabilities directly within those enterprise tools, allowing users to access Amazon Q's knowledge without context switching during their work. For more information, see Integrations. Identity provider 18 Amazon Q Business User Guide ISV integration Amazon Q Business enables independent software vendors (ISVs) to leverage customer enterprise data through the Amazon Q index to enhance their applications with generative AI capabilities. 
ISVs can access this data through two methods: either by being added as a data accessor to an existing customer's Amazon Q index, or by creating a Amazon Q application on behalf of the customer. The SearchRelevantContent API operation allows ISVs to retrieve relevant content from the customer's data sources while maintaining security and access controls, ensuring users only see content they have permission to access. This integration enables software providers to build enhanced application experiences without having to directly connect to or index individual data sources. For more information, see Amazon Q index for independent software vendors (ISVs). Large language model A large language model (LLM) is a language-based, machine learning model that's tuned to a large number (billions) of parameters and trained on a large corpus of documents. Principal Mapping Principal mapping is used to connect users and groups with their user ids and group membership information in data sources connected to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat."} +{"global_id": 380, "doc_id": "qbusiness", "chunk_id": "13", "question_id": 1, "question": "What feature does Amazon Q Business include to interact with third-party services?", "answer_span": "Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce.", "chunk": "to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat. For more information, see Plugins. Quick prompts The Amazon Q Business quick prompts feature helps with end user discoverability of the web experience chat features. Use this feature to prompt your end user to engage with their web experience chat in specific ways. For example, you can show the available configured plugins or inform users that they can choose to summarize their chat. Response personalization Response personalization is a Amazon Q Business feature that customizes chat responses to end users based on metadata associated with them—specifically address and job-related information ISV integration 19 Amazon Q Business User Guide —in your SSO instance. This feature enhances the relevance of responses by tailoring them to the user's specific context within the organization. To use response personalization effectively, you must have already added the necessary user information in your SSO instance. For more information, see Response settings. Retriever A retriever pulls data from an index in real time during a conversation. Amazon Q Business supports a native index retriever and also a Amazon Kendra index retriever. Retrieval Augmented Generation Retrieval Augmented Generation (RAG) is a natural language processing (NLP) technique. Using RAG, generative artificial intelligence (generative AI) is conditioned on specific documents that are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. 
• A generation component takes the query and the retrieved documents and then generates an answer to the"} +{"global_id": 381, "doc_id": "qbusiness", "chunk_id": "13", "question_id": 2, "question": "What does the quick prompts feature help with?", "answer_span": "The Amazon Q Business quick prompts feature helps with end user discoverability of the web experience chat features.", "chunk": "to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat. For more information, see Plugins. Quick prompts The Amazon Q Business quick prompts feature helps with end user discoverability of the web experience chat features. Use this feature to prompt your end user to engage with their web experience chat in specific ways. For example, you can show the available configured plugins or inform users that they can choose to summarize their chat. Response personalization Response personalization is a Amazon Q Business feature that customizes chat responses to end users based on metadata associated with them—specifically address and job-related information ISV integration 19 Amazon Q Business User Guide —in your SSO instance. This feature enhances the relevance of responses by tailoring them to the user's specific context within the organization. To use response personalization effectively, you must have already added the necessary user information in your SSO instance. For more information, see Response settings. Retriever A retriever pulls data from an index in real time during a conversation. Amazon Q Business supports a native index retriever and also a Amazon Kendra index retriever. Retrieval Augmented Generation Retrieval Augmented Generation (RAG) is a natural language processing (NLP) technique. Using RAG, generative artificial intelligence (generative AI) is conditioned on specific documents that are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. • A generation component takes the query and the retrieved documents and then generates an answer to the"} +{"global_id": 382, "doc_id": "qbusiness", "chunk_id": "13", "question_id": 3, "question": "What does response personalization customize?", "answer_span": "Response personalization is a Amazon Q Business feature that customizes chat responses to end users based on metadata associated with them—specifically address and job-related information ISV integration.", "chunk": "to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat. For more information, see Plugins. Quick prompts The Amazon Q Business quick prompts feature helps with end user discoverability of the web experience chat features. Use this feature to prompt your end user to engage with their web experience chat in specific ways. For example, you can show the available configured plugins or inform users that they can choose to summarize their chat. 
Response personalization Response personalization is a Amazon Q Business feature that customizes chat responses to end users based on metadata associated with them—specifically address and job-related information ISV integration 19 Amazon Q Business User Guide —in your SSO instance. This feature enhances the relevance of responses by tailoring them to the user's specific context within the organization. To use response personalization effectively, you must have already added the necessary user information in your SSO instance. For more information, see Response settings. Retriever A retriever pulls data from an index in real time during a conversation. Amazon Q Business supports a native index retriever and also a Amazon Kendra index retriever. Retrieval Augmented Generation Retrieval Augmented Generation (RAG) is a natural language processing (NLP) technique. Using RAG, generative artificial intelligence (generative AI) is conditioned on specific documents that are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. • A generation component takes the query and the retrieved documents and then generates an answer to the"} +{"global_id": 383, "doc_id": "qbusiness", "chunk_id": "13", "question_id": 4, "question": "What does a retriever do in Amazon Q Business?", "answer_span": "A retriever pulls data from an index in real time during a conversation.", "chunk": "to the application. Plugins Amazon Q Business includes a plugins feature that you can use to interact with third-party services such as Jira and Salesforce. With the plugins feature, you can perform actions specific to that service (like creating a ticket) from within your Amazon Q Business web experience chat. For more information, see Plugins. Quick prompts The Amazon Q Business quick prompts feature helps with end user discoverability of the web experience chat features. Use this feature to prompt your end user to engage with their web experience chat in specific ways. For example, you can show the available configured plugins or inform users that they can choose to summarize their chat. Response personalization Response personalization is a Amazon Q Business feature that customizes chat responses to end users based on metadata associated with them—specifically address and job-related information ISV integration 19 Amazon Q Business User Guide —in your SSO instance. This feature enhances the relevance of responses by tailoring them to the user's specific context within the organization. To use response personalization effectively, you must have already added the necessary user information in your SSO instance. For more information, see Response settings. Retriever A retriever pulls data from an index in real time during a conversation. Amazon Q Business supports a native index retriever and also a Amazon Kendra index retriever. Retrieval Augmented Generation Retrieval Augmented Generation (RAG) is a natural language processing (NLP) technique. Using RAG, generative artificial intelligence (generative AI) is conditioned on specific documents that are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. 
• A generation component takes the query and the retrieved documents and then generates an answer to the"} +{"global_id": 384, "doc_id": "qbusiness", "chunk_id": "14", "question_id": 1, "question": "What are the two components of a RAG model?", "answer_span": "A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. • A generation component takes the query and the retrieved documents and then generates an answer to the query using a large language model.", "chunk": "are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. • A generation component takes the query and the retrieved documents and then generates an answer to the query using a large language model. Relevance tuning You can choose to use document attributes to boost and tune the relevance of chat responses for end users from specific content. For example, if you have a document attribute associated document creation or updating date, you use these attributes to boost chat responses from more recently created or updated documents. For more information, see Relevance tuning. Subscription tiers Amazon Q Business offers multiple user subscription tiers and index types that can be combined to meet your organization's needs. User subscription tiers determine the features available to end users, with Pro tier users having access to advanced features like browser extensions. Index types include starter index and enterprise index, each with different capabilities and storage capacities. You can choose any combination of index types and user subscriptions for your Amazon Q Business application. For more information, see Amazon Q Business subscription tiers and index types. Retriever 20 Amazon Q Business User Guide Important Amazon Q Business Pro tier subscriptions aren't supported in Europe (Ireland) (eu-west-1) and Asia Pacific (Sydney) (ap-southeast-2) regions. Tags Manage your Amazon Q Business applications and data sources by assigning tags or labels. You can use tags to categorize your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. Visual content extraction When Amazon Q Business processes your input files"} +{"global_id": 385, "doc_id": "qbusiness", "chunk_id": "14", "question_id": 2, "question": "What can you use to boost chat responses from more recently created or updated documents?", "answer_span": "if you have a document attribute associated document creation or updating date, you use these attributes to boost chat responses from more recently created or updated documents.", "chunk": "are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. • A generation component takes the query and the retrieved documents and then generates an answer to the query using a large language model. Relevance tuning You can choose to use document attributes to boost and tune the relevance of chat responses for end users from specific content. For example, if you have a document attribute associated document creation or updating date, you use these attributes to boost chat responses from more recently created or updated documents. 
For more information, see Relevance tuning. Subscription tiers Amazon Q Business offers multiple user subscription tiers and index types that can be combined to meet your organization's needs. User subscription tiers determine the features available to end users, with Pro tier users having access to advanced features like browser extensions. Index types include starter index and enterprise index, each with different capabilities and storage capacities. You can choose any combination of index types and user subscriptions for your Amazon Q Business application. For more information, see Amazon Q Business subscription tiers and index types. Retriever 20 Amazon Q Business User Guide Important Amazon Q Business Pro tier subscriptions aren't supported in Europe (Ireland) (eu-west-1) and Asia Pacific (Sydney) (ap-southeast-2) regions. Tags Manage your Amazon Q Business applications and data sources by assigning tags or labels. You can use tags to categorize your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. Visual content extraction When Amazon Q Business processes your input files"} +{"global_id": 386, "doc_id": "qbusiness", "chunk_id": "14", "question_id": 3, "question": "What do user subscription tiers determine?", "answer_span": "User subscription tiers determine the features available to end users, with Pro tier users having access to advanced features like browser extensions.", "chunk": "are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. • A generation component takes the query and the retrieved documents and then generates an answer to the query using a large language model. Relevance tuning You can choose to use document attributes to boost and tune the relevance of chat responses for end users from specific content. For example, if you have a document attribute associated document creation or updating date, you use these attributes to boost chat responses from more recently created or updated documents. For more information, see Relevance tuning. Subscription tiers Amazon Q Business offers multiple user subscription tiers and index types that can be combined to meet your organization's needs. User subscription tiers determine the features available to end users, with Pro tier users having access to advanced features like browser extensions. Index types include starter index and enterprise index, each with different capabilities and storage capacities. You can choose any combination of index types and user subscriptions for your Amazon Q Business application. For more information, see Amazon Q Business subscription tiers and index types. Retriever 20 Amazon Q Business User Guide Important Amazon Q Business Pro tier subscriptions aren't supported in Europe (Ireland) (eu-west-1) and Asia Pacific (Sydney) (ap-southeast-2) regions. Tags Manage your Amazon Q Business applications and data sources by assigning tags or labels. You can use tags to categorize your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. 
Visual content extraction When Amazon Q Business processes your input files"} +{"global_id": 387, "doc_id": "qbusiness", "chunk_id": "14", "question_id": 4, "question": "What regions do Amazon Q Business Pro tier subscriptions not support?", "answer_span": "Important Amazon Q Business Pro tier subscriptions aren't supported in Europe (Ireland) (eu-west-1) and Asia Pacific (Sydney) (ap-southeast-2) regions.", "chunk": "are retrieved from a dataset. Amazon Q Business has a built-in RAG system. A RAG model has the following two components: • A retrieval component retrieves relevant documents for the user query. • A generation component takes the query and the retrieved documents and then generates an answer to the query using a large language model. Relevance tuning You can choose to use document attributes to boost and tune the relevance of chat responses for end users from specific content. For example, if you have a document attribute associated document creation or updating date, you use these attributes to boost chat responses from more recently created or updated documents. For more information, see Relevance tuning. Subscription tiers Amazon Q Business offers multiple user subscription tiers and index types that can be combined to meet your organization's needs. User subscription tiers determine the features available to end users, with Pro tier users having access to advanced features like browser extensions. Index types include starter index and enterprise index, each with different capabilities and storage capacities. You can choose any combination of index types and user subscriptions for your Amazon Q Business application. For more information, see Amazon Q Business subscription tiers and index types. Retriever 20 Amazon Q Business User Guide Important Amazon Q Business Pro tier subscriptions aren't supported in Europe (Ireland) (eu-west-1) and Asia Pacific (Sydney) (ap-southeast-2) regions. Tags Manage your Amazon Q Business applications and data sources by assigning tags or labels. You can use tags to categorize your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. Visual content extraction When Amazon Q Business processes your input files"} +{"global_id": 388, "doc_id": "qbusiness", "chunk_id": "15", "question_id": 1, "question": "What does each tag in Amazon Q Business consist of?", "answer_span": "Each tag consists of a key and a value, both of which you define.", "chunk": "your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. Visual content extraction When Amazon Q Business processes your input files from a data source, it uses advanced image understanding capabilities to extract semantic information and insights from images and other visuals. This feature makes visual information in your data sources queryable, allowing end users to find relevant information even when it's conveyed in embedded diagrams, charts, or technical illustrations. Visual content extraction provides additional context and nuance to the information in your data sources and builds a more complete knowledge base from your enterprise data. For more information, see Extracting semantic meaning from embedded visual content. 
User store User Store is an Amazon Q Business data source connector feature that streamlines user and group management across all the data sources attached to your application environment. For more information about how this feature works and implementation details, see Understanding User Store. Web experience An Amazon Q Business web experience is the chat interface that you create using your Amazon Q Business application environment. Then, your end users can chat with your organization’s Amazon Q Business web experience. You can configure and customize your Amazon Q Business web experience using either the Amazon Q Business console or the Amazon Q Business API. For more information, see Customizing your web experience. Tags 21 Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. Topics • Index types • User subscription tiers"} +{"global_id": 389, "doc_id": "qbusiness", "chunk_id": "15", "question_id": 2, "question": "What capabilities does Amazon Q Business use to extract information from images?", "answer_span": "it uses advanced image understanding capabilities to extract semantic information and insights from images and other visuals.", "chunk": "your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. Visual content extraction When Amazon Q Business processes your input files from a data source, it uses advanced image understanding capabilities to extract semantic information and insights from images and other visuals. This feature makes visual information in your data sources queryable, allowing end users to find relevant information even when it's conveyed in embedded diagrams, charts, or technical illustrations. Visual content extraction provides additional context and nuance to the information in your data sources and builds a more complete knowledge base from your enterprise data. For more information, see Extracting semantic meaning from embedded visual content. User store User Store is an Amazon Q Business data source connector feature that streamlines user and group management across all the data sources attached to your application environment. For more information about how this feature works and implementation details, see Understanding User Store. Web experience An Amazon Q Business web experience is the chat interface that you create using your Amazon Q Business application environment. Then, your end users can chat with your organization’s Amazon Q Business web experience. You can configure and customize your Amazon Q Business web experience using either the Amazon Q Business console or the Amazon Q Business API. For more information, see Customizing your web experience. Tags 21 Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. 
Topics • Index types • User subscription tiers"} +{"global_id": 390, "doc_id": "qbusiness", "chunk_id": "15", "question_id": 3, "question": "What is the purpose of the User Store feature in Amazon Q Business?", "answer_span": "User Store is an Amazon Q Business data source connector feature that streamlines user and group management across all the data sources attached to your application environment.", "chunk": "your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. Visual content extraction When Amazon Q Business processes your input files from a data source, it uses advanced image understanding capabilities to extract semantic information and insights from images and other visuals. This feature makes visual information in your data sources queryable, allowing end users to find relevant information even when it's conveyed in embedded diagrams, charts, or technical illustrations. Visual content extraction provides additional context and nuance to the information in your data sources and builds a more complete knowledge base from your enterprise data. For more information, see Extracting semantic meaning from embedded visual content. User store User Store is an Amazon Q Business data source connector feature that streamlines user and group management across all the data sources attached to your application environment. For more information about how this feature works and implementation details, see Understanding User Store. Web experience An Amazon Q Business web experience is the chat interface that you create using your Amazon Q Business application environment. Then, your end users can chat with your organization’s Amazon Q Business web experience. You can configure and customize your Amazon Q Business web experience using either the Amazon Q Business console or the Amazon Q Business API. For more information, see Customizing your web experience. Tags 21 Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. Topics • Index types • User subscription tiers"} +{"global_id": 391, "doc_id": "qbusiness", "chunk_id": "15", "question_id": 4, "question": "How can you customize your Amazon Q Business web experience?", "answer_span": "You can configure and customize your Amazon Q Business web experience using either the Amazon Q Business console or the Amazon Q Business API.", "chunk": "your Amazon Q Business resources in various ways. For example, categorize by purpose, owner, or application environment, or any combination. Each tag consists of a key and a value, both of which you define. For more information, see Tags. Visual content extraction When Amazon Q Business processes your input files from a data source, it uses advanced image understanding capabilities to extract semantic information and insights from images and other visuals. This feature makes visual information in your data sources queryable, allowing end users to find relevant information even when it's conveyed in embedded diagrams, charts, or technical illustrations. 
Visual content extraction provides additional context and nuance to the information in your data sources and builds a more complete knowledge base from your enterprise data. For more information, see Extracting semantic meaning from embedded visual content. User store User Store is an Amazon Q Business data source connector feature that streamlines user and group management across all the data sources attached to your application environment. For more information about how this feature works and implementation details, see Understanding User Store. Web experience An Amazon Q Business web experience is the chat interface that you create using your Amazon Q Business application environment. Then, your end users can chat with your organization’s Amazon Q Business web experience. You can configure and customize your Amazon Q Business web experience using either the Amazon Q Business console or the Amazon Q Business API. For more information, see Customizing your web experience. Tags 21 Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. Topics • Index types • User subscription tiers"} +{"global_id": 392, "doc_id": "qbusiness", "chunk_id": "16", "question_id": 1, "question": "What types of indexes does Amazon Q Business offer?", "answer_span": "Amazon Q Business offers two types of indexes: starter index and enterprise index.", "chunk": "Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. Topics • Index types • User subscription tiers • Understanding user subscriptions • Pricing Index types Amazon Q Business offers two types of indexes: starter index and enterprise index. The following table outlines the features of both. Starter index Enterprise index Ideal use case Ideal use case • Proof-of-concept or developer workloads • Production workloads Features Features • Runs in 1 Availability Zone (AZ) – See • Runs in 3 Availability Zone (AZ) – See Availability Zones (data centers in AWS regions) Availability Zones (data centers in AWS regions) • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) Subscription tiers and index types 22"} +{"global_id": 393, "doc_id": "qbusiness", "chunk_id": "16", "question_id": 2, "question": "What is the ideal use case for the starter index?", "answer_span": "Ideal use case • Proof-of-concept or developer workloads", "chunk": "Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. 
Topics • Index types • User subscription tiers • Understanding user subscriptions • Pricing Index types Amazon Q Business offers two types of indexes: starter index and enterprise index. The following table outlines the features of both. Starter index Enterprise index Ideal use case Ideal use case • Proof-of-concept or developer workloads • Production workloads Features Features • Runs in 1 Availability Zone (AZ) – See • Runs in 3 Availability Zone (AZ) – See Availability Zones (data centers in AWS regions) Availability Zones (data centers in AWS regions) • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) Subscription tiers and index types 22"} +{"global_id": 394, "doc_id": "qbusiness", "chunk_id": "16", "question_id": 3, "question": "How many Availability Zones does the enterprise index run in?", "answer_span": "Runs in 3 Availability Zone (AZ) – See Availability Zones (data centers in AWS regions)", "chunk": "Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. Topics • Index types • User subscription tiers • Understanding user subscriptions • Pricing Index types Amazon Q Business offers two types of indexes: starter index and enterprise index. The following table outlines the features of both. Starter index Enterprise index Ideal use case Ideal use case • Proof-of-concept or developer workloads • Production workloads Features Features • Runs in 1 Availability Zone (AZ) – See • Runs in 3 Availability Zone (AZ) – See Availability Zones (data centers in AWS regions) Availability Zones (data centers in AWS regions) • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) Subscription tiers and index types 22"} +{"global_id": 395, "doc_id": "qbusiness", "chunk_id": "16", "question_id": 4, "question": "What is included in the features of both index types?", "answer_span": "Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)*", "chunk": "Amazon Q Business User Guide Amazon Q Business subscription tiers and index types Amazon Q Business offers multiple index types and user subscription tiers. You can choose any combination of index types and user subscriptions for your Amazon Q Business application environment. Topics • Index types • User subscription tiers • Understanding user subscriptions • Pricing Index types Amazon Q Business offers two types of indexes: starter index and enterprise index. The following table outlines the features of both. 
Starter index Enterprise index Ideal use case Ideal use case • Proof-of-concept or developer workloads • Production workloads Features Features • Runs in 1 Availability Zone (AZ) – See • Runs in 3 Availability Zone (AZ) – See Availability Zones (data centers in AWS regions) Availability Zones (data centers in AWS regions) • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 20,000 document capacity or 200 MB of total extracted text (whichever is reached first)* • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) • Includes up to 100 hours of data source connector usage (time that it takes to scan and index new, updated, or deleted documents) Subscription tiers and index types 22"} +{"global_id": 396, "doc_id": "sagemaker", "chunk_id": "0", "question_id": 1, "question": "What is Amazon SageMaker AI?", "answer_span": "Amazon SageMaker AI is a fully managed machine learning (ML) service.", "chunk": "Amazon SageMaker AI Developer Guide What is Amazon SageMaker AI? Amazon SageMaker AI is a fully managed machine learning (ML) service. With SageMaker AI, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment. It provides a UI experience for running ML workflows that makes SageMaker AI ML tools available across multiple integrated development environments (IDEs). With SageMaker AI, you can store and share your data without having to build and manage your own servers. This gives you or your organizations more time to collaboratively build and develop your ML workflow, and do it sooner. SageMaker AI provides managed ML algorithms to run efficiently against extremely large data in a distributed environment. With built-in support for bring-your-own-algorithms and frameworks, SageMaker AI offers flexible distributed training options that adjust to your specific workflows. Within a few steps, you can deploy a model into a secure and scalable environment from the SageMaker AI console. Topics • Amazon SageMaker AI rename • Amazon SageMaker and Amazon SageMaker AI • Pricing for Amazon SageMaker AI • Recommendations for a first-time user of Amazon SageMaker AI • Overview of machine learning with Amazon SageMaker AI • Amazon SageMaker AI Features Amazon SageMaker AI rename On December 03, 2024, Amazon SageMaker was renamed to Amazon SageMaker AI. This name change does not apply to any of the existing Amazon SageMaker features. Legacy namespaces remain the same The sagemaker API namespaces, along with the following related namespaces, remain unchanged for backward compatibility purposes. • AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs"} +{"global_id": 397, "doc_id": "sagemaker", "chunk_id": "0", "question_id": 2, "question": "What can data scientists and developers do with SageMaker AI?", "answer_span": "With SageMaker AI, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment.", "chunk": "Amazon SageMaker AI Developer Guide What is Amazon SageMaker AI? 
Amazon SageMaker AI is a fully managed machine learning (ML) service. With SageMaker AI, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment. It provides a UI experience for running ML workflows that makes SageMaker AI ML tools available across multiple integrated development environments (IDEs). With SageMaker AI, you can store and share your data without having to build and manage your own servers. This gives you or your organizations more time to collaboratively build and develop your ML workflow, and do it sooner. SageMaker AI provides managed ML algorithms to run efficiently against extremely large data in a distributed environment. With built-in support for bring-your-own-algorithms and frameworks, SageMaker AI offers flexible distributed training options that adjust to your specific workflows. Within a few steps, you can deploy a model into a secure and scalable environment from the SageMaker AI console. Topics • Amazon SageMaker AI rename • Amazon SageMaker and Amazon SageMaker AI • Pricing for Amazon SageMaker AI • Recommendations for a first-time user of Amazon SageMaker AI • Overview of machine learning with Amazon SageMaker AI • Amazon SageMaker AI Features Amazon SageMaker AI rename On December 03, 2024, Amazon SageMaker was renamed to Amazon SageMaker AI. This name change does not apply to any of the existing Amazon SageMaker features. Legacy namespaces remain the same The sagemaker API namespaces, along with the following related namespaces, remain unchanged for backward compatibility purposes. • AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs"} +{"global_id": 398, "doc_id": "sagemaker", "chunk_id": "0", "question_id": 3, "question": "When was Amazon SageMaker renamed to Amazon SageMaker AI?", "answer_span": "On December 03, 2024, Amazon SageMaker was renamed to Amazon SageMaker AI.", "chunk": "Amazon SageMaker AI Developer Guide What is Amazon SageMaker AI? Amazon SageMaker AI is a fully managed machine learning (ML) service. With SageMaker AI, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment. It provides a UI experience for running ML workflows that makes SageMaker AI ML tools available across multiple integrated development environments (IDEs). With SageMaker AI, you can store and share your data without having to build and manage your own servers. This gives you or your organizations more time to collaboratively build and develop your ML workflow, and do it sooner. SageMaker AI provides managed ML algorithms to run efficiently against extremely large data in a distributed environment. With built-in support for bring-your-own-algorithms and frameworks, SageMaker AI offers flexible distributed training options that adjust to your specific workflows. Within a few steps, you can deploy a model into a secure and scalable environment from the SageMaker AI console. 
Topics • Amazon SageMaker AI rename • Amazon SageMaker and Amazon SageMaker AI • Pricing for Amazon SageMaker AI • Recommendations for a first-time user of Amazon SageMaker AI • Overview of machine learning with Amazon SageMaker AI • Amazon SageMaker AI Features Amazon SageMaker AI rename On December 03, 2024, Amazon SageMaker was renamed to Amazon SageMaker AI. This name change does not apply to any of the existing Amazon SageMaker features. Legacy namespaces remain the same The sagemaker API namespaces, along with the following related namespaces, remain unchanged for backward compatibility purposes. • AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs"} +{"global_id": 399, "doc_id": "sagemaker", "chunk_id": "0", "question_id": 4, "question": "What remains unchanged for backward compatibility purposes?", "answer_span": "The sagemaker API namespaces, along with the following related namespaces, remain unchanged for backward compatibility purposes.", "chunk": "Amazon SageMaker AI Developer Guide What is Amazon SageMaker AI? Amazon SageMaker AI is a fully managed machine learning (ML) service. With SageMaker AI, data scientists and developers can quickly and confidently build, train, and deploy ML models into a production-ready hosted environment. It provides a UI experience for running ML workflows that makes SageMaker AI ML tools available across multiple integrated development environments (IDEs). With SageMaker AI, you can store and share your data without having to build and manage your own servers. This gives you or your organizations more time to collaboratively build and develop your ML workflow, and do it sooner. SageMaker AI provides managed ML algorithms to run efficiently against extremely large data in a distributed environment. With built-in support for bring-your-own-algorithms and frameworks, SageMaker AI offers flexible distributed training options that adjust to your specific workflows. Within a few steps, you can deploy a model into a secure and scalable environment from the SageMaker AI console. Topics • Amazon SageMaker AI rename • Amazon SageMaker and Amazon SageMaker AI • Pricing for Amazon SageMaker AI • Recommendations for a first-time user of Amazon SageMaker AI • Overview of machine learning with Amazon SageMaker AI • Amazon SageMaker AI Features Amazon SageMaker AI rename On December 03, 2024, Amazon SageMaker was renamed to Amazon SageMaker AI. This name change does not apply to any of the existing Amazon SageMaker features. Legacy namespaces remain the same The sagemaker API namespaces, along with the following related namespaces, remain unchanged for backward compatibility purposes. 
• AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs"} +{"global_id": 400, "doc_id": "sagemaker", "chunk_id": "1", "question_id": 1, "question": "What is Amazon SageMaker?", "answer_span": "Amazon SageMaker is a unified platform for data, analytics, and AI.", "chunk": "unchanged for backward compatibility purposes. • AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs containing sagemaker Amazon SageMaker and Amazon SageMaker AI On December 03, 2024, Amazon released the next generation of Amazon SageMaker. Amazon SageMaker is a unified platform for data, analytics, and AI. Bringing together AWS machine learning and analytics capabilities, the next generation of SageMaker delivers an integrated experience for analytics and AI with unified access to all your data. Amazon SageMaker includes the following capabilities: • Amazon SageMaker AI (formerly Amazon SageMaker) - Build, train, and deploy ML and foundation models, with fully managed infrastructure, tools, and workflows • Amazon SageMaker Lakehouse – Unify data access across Amazon S3 data lakes, Amazon Redshift, and other data sources • Amazon SageMaker Data and AI Governance – Discover, govern, and collaborate on data and AI securely with Amazon SageMaker Catalog, built on Amazon DataZone • SQL Analytics - Gain insights with the most price-performant SQL engine with Amazon Redshift • Amazon SageMaker Data Processing - Analyze, prepare, and integrate data for analytics and AI using open-source frameworks on Amazon Athena, Amazon EMR, and AWS Glue • Amazon SageMaker Unified Studio – Build with all your data and tools for analytics and AI in a single development environment • Amazon Bedrock - Build and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. 
Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for"} +{"global_id": 401, "doc_id": "sagemaker", "chunk_id": "1", "question_id": 2, "question": "What capabilities does Amazon SageMaker include?", "answer_span": "Amazon SageMaker includes the following capabilities: • Amazon SageMaker AI (formerly Amazon SageMaker) - Build, train, and deploy ML and foundation models, with fully managed infrastructure, tools, and workflows • Amazon SageMaker Lakehouse – Unify data access across Amazon S3 data lakes, Amazon Redshift, and other data sources • Amazon SageMaker Data and AI Governance – Discover, govern, and collaborate on data and AI securely with Amazon SageMaker Catalog, built on Amazon DataZone • SQL Analytics - Gain insights with the most price-performant SQL engine with Amazon Redshift • Amazon SageMaker Data Processing - Analyze, prepare, and integrate data for analytics and AI using open-source frameworks on Amazon Athena, Amazon EMR, and AWS Glue • Amazon SageMaker Unified Studio – Build with all your data and tools for analytics and AI in a single development environment • Amazon Bedrock - Build and scale generative AI applications", "chunk": "unchanged for backward compatibility purposes. • AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs containing sagemaker Amazon SageMaker and Amazon SageMaker AI On December 03, 2024, Amazon released the next generation of Amazon SageMaker. Amazon SageMaker is a unified platform for data, analytics, and AI. Bringing together AWS machine learning and analytics capabilities, the next generation of SageMaker delivers an integrated experience for analytics and AI with unified access to all your data. Amazon SageMaker includes the following capabilities: • Amazon SageMaker AI (formerly Amazon SageMaker) - Build, train, and deploy ML and foundation models, with fully managed infrastructure, tools, and workflows • Amazon SageMaker Lakehouse – Unify data access across Amazon S3 data lakes, Amazon Redshift, and other data sources • Amazon SageMaker Data and AI Governance – Discover, govern, and collaborate on data and AI securely with Amazon SageMaker Catalog, built on Amazon DataZone • SQL Analytics - Gain insights with the most price-performant SQL engine with Amazon Redshift • Amazon SageMaker Data Processing - Analyze, prepare, and integrate data for analytics and AI using open-source frameworks on Amazon Athena, Amazon EMR, and AWS Glue • Amazon SageMaker Unified Studio – Build with all your data and tools for analytics and AI in a single development environment • Amazon Bedrock - Build and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. 
Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for"} +{"global_id": 402, "doc_id": "sagemaker", "chunk_id": "1", "question_id": 3, "question": "When was the next generation of Amazon SageMaker released?", "answer_span": "On December 03, 2024, Amazon released the next generation of Amazon SageMaker.", "chunk": "unchanged for backward compatibility purposes. • AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs containing sagemaker Amazon SageMaker and Amazon SageMaker AI On December 03, 2024, Amazon released the next generation of Amazon SageMaker. Amazon SageMaker is a unified platform for data, analytics, and AI. Bringing together AWS machine learning and analytics capabilities, the next generation of SageMaker delivers an integrated experience for analytics and AI with unified access to all your data. Amazon SageMaker includes the following capabilities: • Amazon SageMaker AI (formerly Amazon SageMaker) - Build, train, and deploy ML and foundation models, with fully managed infrastructure, tools, and workflows • Amazon SageMaker Lakehouse – Unify data access across Amazon S3 data lakes, Amazon Redshift, and other data sources • Amazon SageMaker Data and AI Governance – Discover, govern, and collaborate on data and AI securely with Amazon SageMaker Catalog, built on Amazon DataZone • SQL Analytics - Gain insights with the most price-performant SQL engine with Amazon Redshift • Amazon SageMaker Data Processing - Analyze, prepare, and integrate data for analytics and AI using open-source frameworks on Amazon Athena, Amazon EMR, and AWS Glue • Amazon SageMaker Unified Studio – Build with all your data and tools for analytics and AI in a single development environment • Amazon Bedrock - Build and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for"} +{"global_id": 403, "doc_id": "sagemaker", "chunk_id": "1", "question_id": 4, "question": "What is the purpose of Amazon SageMaker Data Processing?", "answer_span": "Analyze, prepare, and integrate data for analytics and AI using open-source frameworks on Amazon Athena, Amazon EMR, and AWS Glue.", "chunk": "unchanged for backward compatibility purposes. • AWS CLI commands • Managed policies containing AmazonSageMaker prefixes Amazon SageMaker AI rename 1 Amazon SageMaker AI Developer Guide • Service endpoints containing sagemaker • AWS CloudFormation resources containing AWS::SageMaker prefixes • Service-linked role containing AWSServiceRoleForSageMaker • Console URLs containing sagemaker • Documentation URLs containing sagemaker Amazon SageMaker and Amazon SageMaker AI On December 03, 2024, Amazon released the next generation of Amazon SageMaker. Amazon SageMaker is a unified platform for data, analytics, and AI. Bringing together AWS machine learning and analytics capabilities, the next generation of SageMaker delivers an integrated experience for analytics and AI with unified access to all your data. 
Amazon SageMaker includes the following capabilities: • Amazon SageMaker AI (formerly Amazon SageMaker) - Build, train, and deploy ML and foundation models, with fully managed infrastructure, tools, and workflows • Amazon SageMaker Lakehouse – Unify data access across Amazon S3 data lakes, Amazon Redshift, and other data sources • Amazon SageMaker Data and AI Governance – Discover, govern, and collaborate on data and AI securely with Amazon SageMaker Catalog, built on Amazon DataZone • SQL Analytics - Gain insights with the most price-performant SQL engine with Amazon Redshift • Amazon SageMaker Data Processing - Analyze, prepare, and integrate data for analytics and AI using open-source frameworks on Amazon Athena, Amazon EMR, and AWS Glue • Amazon SageMaker Unified Studio – Build with all your data and tools for analytics and AI in a single development environment • Amazon Bedrock - Build and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for"} +{"global_id": 404, "doc_id": "sagemaker", "chunk_id": "2", "question_id": 1, "question": "What is recommended for a first-time user of Amazon SageMaker AI?", "answer_span": "If you're a first-time user of SageMaker AI, we recommend that you complete the following: 1. Overview of machine learning with Amazon SageMaker AI – Get an overview of the machine learning (ML) lifecycle and learn about solutions that are offered.", "chunk": "and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for a first-time user of Amazon SageMaker AI If you're a first-time user of SageMaker AI, we recommend that you complete the following: 1. Overview of machine learning with Amazon SageMaker AI – Get an overview of the machine learning (ML) lifecycle and learn about solutions that are offered. This page explains key concepts and describes the core components involved in building AI solutions with SageMaker AI. 2. Guide to getting set up with Amazon SageMaker AI – Learn how to set up and use SageMaker AI based on your needs. 3. Automated ML, no-code, or low-code – Learn about low-code and no-code ML options that simplify a ML workflow by automating machine learning tasks. These options are helpful ML learning tools because they provide visibility into the code by generating notebooks for each of the automated ML tasks. 4. Machine learning environments offered by Amazon SageMaker AI – Familiarize yourself with the ML environments that you can use to develop your ML workflow, such as information and examples about ready-to-use and custom models. 5. Explore other topics – Use the SageMaker AI Developer Guide's table of contents to explore more topics. For example, you can find information about ML lifecycle stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. 
Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow"} +{"global_id": 405, "doc_id": "sagemaker", "chunk_id": "2", "question_id": 2, "question": "What does the Guide to getting set up with Amazon SageMaker AI help you learn?", "answer_span": "Learn how to set up and use SageMaker AI based on your needs.", "chunk": "and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for a first-time user of Amazon SageMaker AI If you're a first-time user of SageMaker AI, we recommend that you complete the following: 1. Overview of machine learning with Amazon SageMaker AI – Get an overview of the machine learning (ML) lifecycle and learn about solutions that are offered. This page explains key concepts and describes the core components involved in building AI solutions with SageMaker AI. 2. Guide to getting set up with Amazon SageMaker AI – Learn how to set up and use SageMaker AI based on your needs. 3. Automated ML, no-code, or low-code – Learn about low-code and no-code ML options that simplify a ML workflow by automating machine learning tasks. These options are helpful ML learning tools because they provide visibility into the code by generating notebooks for each of the automated ML tasks. 4. Machine learning environments offered by Amazon SageMaker AI – Familiarize yourself with the ML environments that you can use to develop your ML workflow, such as information and examples about ready-to-use and custom models. 5. Explore other topics – Use the SageMaker AI Developer Guide's table of contents to explore more topics. For example, you can find information about ML lifecycle stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow"} +{"global_id": 406, "doc_id": "sagemaker", "chunk_id": "2", "question_id": 3, "question": "What options simplify a ML workflow by automating machine learning tasks?", "answer_span": "Learn about low-code and no-code ML options that simplify a ML workflow by automating machine learning tasks.", "chunk": "and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for a first-time user of Amazon SageMaker AI If you're a first-time user of SageMaker AI, we recommend that you complete the following: 1. Overview of machine learning with Amazon SageMaker AI – Get an overview of the machine learning (ML) lifecycle and learn about solutions that are offered. This page explains key concepts and describes the core components involved in building AI solutions with SageMaker AI. 2. Guide to getting set up with Amazon SageMaker AI – Learn how to set up and use SageMaker AI based on your needs. 3. 
Automated ML, no-code, or low-code – Learn about low-code and no-code ML options that simplify a ML workflow by automating machine learning tasks. These options are helpful ML learning tools because they provide visibility into the code by generating notebooks for each of the automated ML tasks. 4. Machine learning environments offered by Amazon SageMaker AI – Familiarize yourself with the ML environments that you can use to develop your ML workflow, such as information and examples about ready-to-use and custom models. 5. Explore other topics – Use the SageMaker AI Developer Guide's table of contents to explore more topics. For example, you can find information about ML lifecycle stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow"} +{"global_id": 407, "doc_id": "sagemaker", "chunk_id": "2", "question_id": 4, "question": "What can you find in the SageMaker AI Developer Guide's table of contents?", "answer_span": "Use the SageMaker AI Developer Guide's table of contents to explore more topics.", "chunk": "and scale generative AI applications For more information, refer to Amazon SageMaker. Pricing for Amazon SageMaker AI For information about AWS Free Tier limits and the cost of using SageMaker AI, see Amazon SageMaker AI Pricing. Amazon SageMaker and Amazon SageMaker AI 2 Amazon SageMaker AI Developer Guide Recommendations for a first-time user of Amazon SageMaker AI If you're a first-time user of SageMaker AI, we recommend that you complete the following: 1. Overview of machine learning with Amazon SageMaker AI – Get an overview of the machine learning (ML) lifecycle and learn about solutions that are offered. This page explains key concepts and describes the core components involved in building AI solutions with SageMaker AI. 2. Guide to getting set up with Amazon SageMaker AI – Learn how to set up and use SageMaker AI based on your needs. 3. Automated ML, no-code, or low-code – Learn about low-code and no-code ML options that simplify a ML workflow by automating machine learning tasks. These options are helpful ML learning tools because they provide visibility into the code by generating notebooks for each of the automated ML tasks. 4. Machine learning environments offered by Amazon SageMaker AI – Familiarize yourself with the ML environments that you can use to develop your ML workflow, such as information and examples about ready-to-use and custom models. 5. Explore other topics – Use the SageMaker AI Developer Guide's table of contents to explore more topics. For example, you can find information about ML lifecycle stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. 
Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow"} +{"global_id": 408, "doc_id": "sagemaker", "chunk_id": "3", "question_id": 1, "question": "What does Amazon SageMaker AI help you accomplish?", "answer_span": "This section describes a typical machine learning (ML) workflow and describes how to accomplish those tasks with Amazon SageMaker AI.", "chunk": "stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow and describes how to accomplish those tasks with Amazon SageMaker AI. In machine learning, you teach a computer to make predictions or inferences. First, you use an algorithm and example data to train a model. Then, you integrate your model into your application to generate inferences in real time and at scale. The following diagram shows the typical workflow for creating an ML model. It includes three stages in a circular flow that we cover in more detail proceeding the diagram: Recommendations for a first-time user of Amazon SageMaker AI 3 Amazon SageMaker AI Developer Guide • Generate example data • Train a model • Deploy the model The diagram shows how to perform the following tasks in most typical scenarios: 1. Generate example data – To train a model, you need example data. The type of data that you need depends on the business problem that you want the model to solve. This relates to the inferences that you want the model to generate. For example, if you want to create a model that predicts a number from an input image of a handwritten digit. To train this model, you need example images of handwritten numbers. Data scientists often devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into"} +{"global_id": 409, "doc_id": "sagemaker", "chunk_id": "3", "question_id": 2, "question": "What are the three stages included in the typical workflow for creating an ML model?", "answer_span": "It includes three stages in a circular flow that we cover in more detail proceeding the diagram: Generate example data, Train a model, Deploy the model.", "chunk": "stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow and describes how to accomplish those tasks with Amazon SageMaker AI. In machine learning, you teach a computer to make predictions or inferences. First, you use an algorithm and example data to train a model. Then, you integrate your model into your application to generate inferences in real time and at scale. The following diagram shows the typical workflow for creating an ML model. 
It includes three stages in a circular flow that we cover in more detail proceeding the diagram: Recommendations for a first-time user of Amazon SageMaker AI 3 Amazon SageMaker AI Developer Guide • Generate example data • Train a model • Deploy the model The diagram shows how to perform the following tasks in most typical scenarios: 1. Generate example data – To train a model, you need example data. The type of data that you need depends on the business problem that you want the model to solve. This relates to the inferences that you want the model to generate. For example, if you want to create a model that predicts a number from an input image of a handwritten digit. To train this model, you need example images of handwritten numbers. Data scientists often devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into"} +{"global_id": 410, "doc_id": "sagemaker", "chunk_id": "3", "question_id": 3, "question": "What do you need to train a model?", "answer_span": "To train a model, you need example data.", "chunk": "stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow and describes how to accomplish those tasks with Amazon SageMaker AI. In machine learning, you teach a computer to make predictions or inferences. First, you use an algorithm and example data to train a model. Then, you integrate your model into your application to generate inferences in real time and at scale. The following diagram shows the typical workflow for creating an ML model. It includes three stages in a circular flow that we cover in more detail proceeding the diagram: Recommendations for a first-time user of Amazon SageMaker AI 3 Amazon SageMaker AI Developer Guide • Generate example data • Train a model • Deploy the model The diagram shows how to perform the following tasks in most typical scenarios: 1. Generate example data – To train a model, you need example data. The type of data that you need depends on the business problem that you want the model to solve. This relates to the inferences that you want the model to generate. For example, if you want to create a model that predicts a number from an input image of a handwritten digit. To train this model, you need example images of handwritten numbers. Data scientists often devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into"} +{"global_id": 411, "doc_id": "sagemaker", "chunk_id": "3", "question_id": 4, "question": "What is the first step in preprocessing data?", "answer_span": "Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available.", "chunk": "stages, in Overview of machine learning with Amazon SageMaker AI, and various solutions that SageMaker AI offers. 6. 
Amazon SageMaker AI resources – Refer to the various developer resources that SageMaker AI offers. Overview of machine learning with Amazon SageMaker AI This section describes a typical machine learning (ML) workflow and describes how to accomplish those tasks with Amazon SageMaker AI. In machine learning, you teach a computer to make predictions or inferences. First, you use an algorithm and example data to train a model. Then, you integrate your model into your application to generate inferences in real time and at scale. The following diagram shows the typical workflow for creating an ML model. It includes three stages in a circular flow that we cover in more detail proceeding the diagram: Recommendations for a first-time user of Amazon SageMaker AI 3 Amazon SageMaker AI Developer Guide • Generate example data • Train a model • Deploy the model The diagram shows how to perform the following tasks in most typical scenarios: 1. Generate example data – To train a model, you need example data. The type of data that you need depends on the business problem that you want the model to solve. This relates to the inferences that you want the model to generate. For example, if you want to create a model that predicts a number from an input image of a handwritten digit. To train this model, you need example images of handwritten numbers. Data scientists often devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into"} +{"global_id": 412, "doc_id": "sagemaker", "chunk_id": "4", "question_id": 1, "question": "What should you do before using data for model training?", "answer_span": "devote time exploring and preprocessing example data before using it for model training.", "chunk": "devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into a single repository. Overview of machine learning with Amazon SageMaker AI 4 Amazon SageMaker AI Developer Guide b. Clean the data – To improve model training, inspect the data and clean it, as needed. For example, if your data has a country name attribute with values United States and US, you can edit the data to be consistent. c. Prepare or transform the data – To improve performance, you might perform additional data transformations. For example, you might choose to combine attributes for a model that predicts the conditions that require de-icing an aircraft. Instead of using temperature and humidity attributes separately, you can combine those attributes into a new attribute to get a better model. In SageMaker AI, you can preprocess example data using SageMaker APIs with the SageMaker Python SDK in an integrated development environment (IDE). With SDK for Python (Boto3) you can fetch, explore, and prepare your data for model training. For information about data preparation, processing, and transforming your data, see Recommendations for choosing the right data preparation tool in SageMaker AI, Data transformation workloads with SageMaker Processing, and Create, store, and share features with Feature Store. 2. 
Train a model – Model training includes both training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. For"} +{"global_id": 413, "doc_id": "sagemaker", "chunk_id": "4", "question_id": 2, "question": "What is the first step in preprocessing data?", "answer_span": "Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available.", "chunk": "devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into a single repository. Overview of machine learning with Amazon SageMaker AI 4 Amazon SageMaker AI Developer Guide b. Clean the data – To improve model training, inspect the data and clean it, as needed. For example, if your data has a country name attribute with values United States and US, you can edit the data to be consistent. c. Prepare or transform the data – To improve performance, you might perform additional data transformations. For example, you might choose to combine attributes for a model that predicts the conditions that require de-icing an aircraft. Instead of using temperature and humidity attributes separately, you can combine those attributes into a new attribute to get a better model. In SageMaker AI, you can preprocess example data using SageMaker APIs with the SageMaker Python SDK in an integrated development environment (IDE). With SDK for Python (Boto3) you can fetch, explore, and prepare your data for model training. For information about data preparation, processing, and transforming your data, see Recommendations for choosing the right data preparation tool in SageMaker AI, Data transformation workloads with SageMaker Processing, and Create, store, and share features with Feature Store. 2. Train a model – Model training includes both training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. For"} +{"global_id": 414, "doc_id": "sagemaker", "chunk_id": "4", "question_id": 3, "question": "How can you improve model training?", "answer_span": "To improve model training, inspect the data and clean it, as needed.", "chunk": "devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into a single repository. Overview of machine learning with Amazon SageMaker AI 4 Amazon SageMaker AI Developer Guide b. Clean the data – To improve model training, inspect the data and clean it, as needed. For example, if your data has a country name attribute with values United States and US, you can edit the data to be consistent. c. Prepare or transform the data – To improve performance, you might perform additional data transformations. 
For example, you might choose to combine attributes for a model that predicts the conditions that require de-icing an aircraft. Instead of using temperature and humidity attributes separately, you can combine those attributes into a new attribute to get a better model. In SageMaker AI, you can preprocess example data using SageMaker APIs with the SageMaker Python SDK in an integrated development environment (IDE). With SDK for Python (Boto3) you can fetch, explore, and prepare your data for model training. For information about data preparation, processing, and transforming your data, see Recommendations for choosing the right data preparation tool in SageMaker AI, Data transformation workloads with SageMaker Processing, and Create, store, and share features with Feature Store. 2. Train a model – Model training includes both training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. For"} +{"global_id": 415, "doc_id": "sagemaker", "chunk_id": "4", "question_id": 4, "question": "What do you need to train a model?", "answer_span": "To train a model, you need an algorithm or a pre-trained base model.", "chunk": "devote time exploring and preprocessing example data before using it for model training. To preprocess data, you typically do the following: a. Fetch the data – You might have in-house example data repositories, or you might use datasets that are publicly available. Typically, you pull the dataset or datasets into a single repository. Overview of machine learning with Amazon SageMaker AI 4 Amazon SageMaker AI Developer Guide b. Clean the data – To improve model training, inspect the data and clean it, as needed. For example, if your data has a country name attribute with values United States and US, you can edit the data to be consistent. c. Prepare or transform the data – To improve performance, you might perform additional data transformations. For example, you might choose to combine attributes for a model that predicts the conditions that require de-icing an aircraft. Instead of using temperature and humidity attributes separately, you can combine those attributes into a new attribute to get a better model. In SageMaker AI, you can preprocess example data using SageMaker APIs with the SageMaker Python SDK in an integrated development environment (IDE). With SDK for Python (Boto3) you can fetch, explore, and prepare your data for model training. For information about data preparation, processing, and transforming your data, see Recommendations for choosing the right data preparation tool in SageMaker AI, Data transformation workloads with SageMaker Processing, and Create, store, and share features with Feature Store. 2. Train a model – Model training includes both training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. 
For"} +{"global_id": 416, "doc_id": "sagemaker", "chunk_id": "5", "question_id": 1, "question": "What do you need to train a model?", "answer_span": "To train a model, you need an algorithm or a pre-trained base model.", "chunk": "training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. For a list of algorithms provided by SageMaker and related considerations, see Built-in algorithms and pretrained models in Amazon SageMaker. For a UI-based training solution that provides algorithms and models, see SageMaker JumpStart pretrained models. You also need compute resources for training. Your resource use depends on the size of your training dataset and how quickly you need the results. You can use resources ranging from a single general-purpose instance to a distributed cluster of GPU instances. For more information, see Train a Model with Amazon SageMaker. • Evaluating the model – After you train your model, you evaluate it to determine whether the accuracy of the inferences is acceptable. To train and evaluate your model, use the SageMaker Python SDK to send requests to the model for inferences through one of the available IDEs. For more information about evaluating your model, see Data and model quality monitoring with Amazon SageMaker Model Monitor. 3. Deploy the model – You traditionally re-engineer a model before you integrate it with your application and deploy it. With SageMaker AI hosting services, you can deploy your model Overview of machine learning with Amazon SageMaker AI 5 Amazon SageMaker AI Developer Guide independently, which decouples it from your application code. For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data"} +{"global_id": 417, "doc_id": "sagemaker", "chunk_id": "5", "question_id": 2, "question": "What resources do you need for training?", "answer_span": "You also need compute resources for training.", "chunk": "training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. For a list of algorithms provided by SageMaker and related considerations, see Built-in algorithms and pretrained models in Amazon SageMaker. For a UI-based training solution that provides algorithms and models, see SageMaker JumpStart pretrained models. You also need compute resources for training. Your resource use depends on the size of your training dataset and how quickly you need the results. You can use resources ranging from a single general-purpose instance to a distributed cluster of GPU instances. For more information, see Train a Model with Amazon SageMaker. • Evaluating the model – After you train your model, you evaluate it to determine whether the accuracy of the inferences is acceptable. To train and evaluate your model, use the SageMaker Python SDK to send requests to the model for inferences through one of the available IDEs. 
For more information about evaluating your model, see Data and model quality monitoring with Amazon SageMaker Model Monitor. 3. Deploy the model – You traditionally re-engineer a model before you integrate it with your application and deploy it. With SageMaker AI hosting services, you can deploy your model Overview of machine learning with Amazon SageMaker AI 5 Amazon SageMaker AI Developer Guide independently, which decouples it from your application code. For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data"} +{"global_id": 418, "doc_id": "sagemaker", "chunk_id": "5", "question_id": 3, "question": "How do you evaluate your model?", "answer_span": "After you train your model, you evaluate it to determine whether the accuracy of the inferences is acceptable.", "chunk": "training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. For a list of algorithms provided by SageMaker and related considerations, see Built-in algorithms and pretrained models in Amazon SageMaker. For a UI-based training solution that provides algorithms and models, see SageMaker JumpStart pretrained models. You also need compute resources for training. Your resource use depends on the size of your training dataset and how quickly you need the results. You can use resources ranging from a single general-purpose instance to a distributed cluster of GPU instances. For more information, see Train a Model with Amazon SageMaker. • Evaluating the model – After you train your model, you evaluate it to determine whether the accuracy of the inferences is acceptable. To train and evaluate your model, use the SageMaker Python SDK to send requests to the model for inferences through one of the available IDEs. For more information about evaluating your model, see Data and model quality monitoring with Amazon SageMaker Model Monitor. 3. Deploy the model – You traditionally re-engineer a model before you integrate it with your application and deploy it. With SageMaker AI hosting services, you can deploy your model Overview of machine learning with Amazon SageMaker AI 5 Amazon SageMaker AI Developer Guide independently, which decouples it from your application code. For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data"} +{"global_id": 419, "doc_id": "sagemaker", "chunk_id": "5", "question_id": 4, "question": "What is the purpose of deploying a model?", "answer_span": "You traditionally re-engineer a model before you integrate it with your application and deploy it.", "chunk": "training and evaluating the model, as follows: • Training the model – To train a model, you need an algorithm or a pre-trained base model. The algorithm you choose depends on a number of factors. For a built-in solution, you can use one of the algorithms that SageMaker provides. 
For a list of algorithms provided by SageMaker and related considerations, see Built-in algorithms and pretrained models in Amazon SageMaker. For a UI-based training solution that provides algorithms and models, see SageMaker JumpStart pretrained models. You also need compute resources for training. Your resource use depends on the size of your training dataset and how quickly you need the results. You can use resources ranging from a single general-purpose instance to a distributed cluster of GPU instances. For more information, see Train a Model with Amazon SageMaker. • Evaluating the model – After you train your model, you evaluate it to determine whether the accuracy of the inferences is acceptable. To train and evaluate your model, use the SageMaker Python SDK to send requests to the model for inferences through one of the available IDEs. For more information about evaluating your model, see Data and model quality monitoring with Amazon SageMaker Model Monitor. 3. Deploy the model – You traditionally re-engineer a model before you integrate it with your application and deploy it. With SageMaker AI hosting services, you can deploy your model Overview of machine learning with Amazon SageMaker AI 5 Amazon SageMaker AI Developer Guide independently, which decouples it from your application code. For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data"} +{"global_id": 420, "doc_id": "sagemaker", "chunk_id": "6", "question_id": 1, "question": "What is a continuous cycle in machine learning after deploying a model?", "answer_span": "After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift.", "chunk": "from your application code. For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data to include the newly collected highquality data. As more example data becomes available, you continue retraining your model to increase accuracy. Amazon SageMaker AI Features Amazon SageMaker AI includes the following features. Topics • New features for re:Invent 2024 • Machine learning environments • Major features New features for re:Invent 2024 SageMaker AI includes the following new features for re:Invent 2024. HyperPod recipes You can run recipes within Amazon SageMaker HyperPod or as SageMaker training jobs. You use the HyperPod training adapter as the framework to help you run end-to-end training workflows. The training adapter is built on the NVIDIA NeMo framework and Neuronx Distributed Training package. HyperPod in Studio In Amazon SageMaker Studio, you can launch machine learning workloads on HyperPod clusters and view HyperPod cluster information. The increased visibility into cluster details and hardware metrics can help your team identify the right candidate for your pre-training or finetuning workloads. 
SageMaker AI Features 6 Amazon SageMaker AI Developer Guide HyperPod task governance Amazon SageMaker HyperPod task governance is a robust management system designed to streamline resource allocation and ensure efficient utilization of compute resources across teams and projects for your Amazon EKS clusters. HyperPod task governance also provides Amazon EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built,"} +{"global_id": 421, "doc_id": "sagemaker", "chunk_id": "6", "question_id": 2, "question": "What framework is the HyperPod training adapter built on?", "answer_span": "The training adapter is built on the NVIDIA NeMo framework and Neuronx Distributed Training package.", "chunk": "from your application code. For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data to include the newly collected highquality data. As more example data becomes available, you continue retraining your model to increase accuracy. Amazon SageMaker AI Features Amazon SageMaker AI includes the following features. Topics • New features for re:Invent 2024 • Machine learning environments • Major features New features for re:Invent 2024 SageMaker AI includes the following new features for re:Invent 2024. HyperPod recipes You can run recipes within Amazon SageMaker HyperPod or as SageMaker training jobs. You use the HyperPod training adapter as the framework to help you run end-to-end training workflows. The training adapter is built on the NVIDIA NeMo framework and Neuronx Distributed Training package. HyperPod in Studio In Amazon SageMaker Studio, you can launch machine learning workloads on HyperPod clusters and view HyperPod cluster information. The increased visibility into cluster details and hardware metrics can help your team identify the right candidate for your pre-training or finetuning workloads. SageMaker AI Features 6 Amazon SageMaker AI Developer Guide HyperPod task governance Amazon SageMaker HyperPod task governance is a robust management system designed to streamline resource allocation and ensure efficient utilization of compute resources across teams and projects for your Amazon EKS clusters. HyperPod task governance also provides Amazon EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built,"} +{"global_id": 422, "doc_id": "sagemaker", "chunk_id": "6", "question_id": 3, "question": "What does Amazon SageMaker HyperPod task governance provide?", "answer_span": "HyperPod task governance also provides Amazon EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information.", "chunk": "from your application code. 
For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data to include the newly collected highquality data. As more example data becomes available, you continue retraining your model to increase accuracy. Amazon SageMaker AI Features Amazon SageMaker AI includes the following features. Topics • New features for re:Invent 2024 • Machine learning environments • Major features New features for re:Invent 2024 SageMaker AI includes the following new features for re:Invent 2024. HyperPod recipes You can run recipes within Amazon SageMaker HyperPod or as SageMaker training jobs. You use the HyperPod training adapter as the framework to help you run end-to-end training workflows. The training adapter is built on the NVIDIA NeMo framework and Neuronx Distributed Training package. HyperPod in Studio In Amazon SageMaker Studio, you can launch machine learning workloads on HyperPod clusters and view HyperPod cluster information. The increased visibility into cluster details and hardware metrics can help your team identify the right candidate for your pre-training or finetuning workloads. SageMaker AI Features 6 Amazon SageMaker AI Developer Guide HyperPod task governance Amazon SageMaker HyperPod task governance is a robust management system designed to streamline resource allocation and ensure efficient utilization of compute resources across teams and projects for your Amazon EKS clusters. HyperPod task governance also provides Amazon EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built,"} +{"global_id": 423, "doc_id": "sagemaker", "chunk_id": "6", "question_id": 4, "question": "What can you do in Amazon SageMaker Studio with HyperPod?", "answer_span": "In Amazon SageMaker Studio, you can launch machine learning workloads on HyperPod clusters and view HyperPod cluster information.", "chunk": "from your application code. For more information, see Deploy models for inference. Machine learning is a continuous cycle. After deploying a model, you monitor the inferences, collect more high-quality data, and evaluate the model to identify drift. You then increase the accuracy of your inferences by updating your training data to include the newly collected highquality data. As more example data becomes available, you continue retraining your model to increase accuracy. Amazon SageMaker AI Features Amazon SageMaker AI includes the following features. Topics • New features for re:Invent 2024 • Machine learning environments • Major features New features for re:Invent 2024 SageMaker AI includes the following new features for re:Invent 2024. HyperPod recipes You can run recipes within Amazon SageMaker HyperPod or as SageMaker training jobs. You use the HyperPod training adapter as the framework to help you run end-to-end training workflows. The training adapter is built on the NVIDIA NeMo framework and Neuronx Distributed Training package. 
HyperPod in Studio In Amazon SageMaker Studio, you can launch machine learning workloads on HyperPod clusters and view HyperPod cluster information. The increased visibility into cluster details and hardware metrics can help your team identify the right candidate for your pre-training or finetuning workloads. SageMaker AI Features 6 Amazon SageMaker AI Developer Guide HyperPod task governance Amazon SageMaker HyperPod task governance is a robust management system designed to streamline resource allocation and ensure efficient utilization of compute resources across teams and projects for your Amazon EKS clusters. HyperPod task governance also provides Amazon EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built,"} +{"global_id": 424, "doc_id": "sagemaker", "chunk_id": "7", "question_id": 1, "question": "What does EKS cluster Observability offer?", "answer_span": "EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information.", "chunk": "EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built, published, and distributed by industry-leading application providers. Partner AI Apps are certified to run on SageMaker AI. With Partner AI Apps, users can accelerate and improve how they build solutions based on foundation models (FM) and classic ML models without compromising the security of their sensitive data, which stays completely within their trusted security configuration and is never shared with a third party. Q Developer is available in Canvas You can chat with Amazon Q Developer in Amazon SageMaker Canvas using natural language for generative AI assistance with solving your machine learning problems. You can converse with Q Developer to discuss the steps of a machine learning workflow and leverage Canvas functionality such as data transforms, model building, and deployment. SageMaker training plans Amazon SageMaker training plans are a compute reservation capability designed for largescale AI model training workloads running on SageMaker training jobs and HyperPod clusters. They provide predictable access to high-demand GPU-accelerated computing resources within specified timelines. You can specify a desired timeline, duration, and maximum compute resources, and SageMaker training plans automatically manages infrastructure setup, workload execution, and fault recovery. This allows for efficiently planning and executing mission-critical AI projects with a predictable cost model. Machine learning environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. 
Code Editor Code Editor extends Studio so that you"} +{"global_id": 425, "doc_id": "sagemaker", "chunk_id": "7", "question_id": 2, "question": "What can users do with Amazon SageMaker Partner AI Apps?", "answer_span": "With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built, published, and distributed by industry-leading application providers.", "chunk": "EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built, published, and distributed by industry-leading application providers. Partner AI Apps are certified to run on SageMaker AI. With Partner AI Apps, users can accelerate and improve how they build solutions based on foundation models (FM) and classic ML models without compromising the security of their sensitive data, which stays completely within their trusted security configuration and is never shared with a third party. Q Developer is available in Canvas You can chat with Amazon Q Developer in Amazon SageMaker Canvas using natural language for generative AI assistance with solving your machine learning problems. You can converse with Q Developer to discuss the steps of a machine learning workflow and leverage Canvas functionality such as data transforms, model building, and deployment. SageMaker training plans Amazon SageMaker training plans are a compute reservation capability designed for largescale AI model training workloads running on SageMaker training jobs and HyperPod clusters. They provide predictable access to high-demand GPU-accelerated computing resources within specified timelines. You can specify a desired timeline, duration, and maximum compute resources, and SageMaker training plans automatically manages infrastructure setup, workload execution, and fault recovery. This allows for efficiently planning and executing mission-critical AI projects with a predictable cost model. Machine learning environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. Code Editor Code Editor extends Studio so that you"} +{"global_id": 426, "doc_id": "sagemaker", "chunk_id": "7", "question_id": 3, "question": "What are SageMaker training plans designed for?", "answer_span": "Amazon SageMaker training plans are a compute reservation capability designed for largescale AI model training workloads running on SageMaker training jobs and HyperPod clusters.", "chunk": "EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built, published, and distributed by industry-leading application providers. Partner AI Apps are certified to run on SageMaker AI. 
With Partner AI Apps, users can accelerate and improve how they build solutions based on foundation models (FM) and classic ML models without compromising the security of their sensitive data, which stays completely within their trusted security configuration and is never shared with a third party. Q Developer is available in Canvas You can chat with Amazon Q Developer in Amazon SageMaker Canvas using natural language for generative AI assistance with solving your machine learning problems. You can converse with Q Developer to discuss the steps of a machine learning workflow and leverage Canvas functionality such as data transforms, model building, and deployment. SageMaker training plans Amazon SageMaker training plans are a compute reservation capability designed for largescale AI model training workloads running on SageMaker training jobs and HyperPod clusters. They provide predictable access to high-demand GPU-accelerated computing resources within specified timelines. You can specify a desired timeline, duration, and maximum compute resources, and SageMaker training plans automatically manages infrastructure setup, workload execution, and fault recovery. This allows for efficiently planning and executing mission-critical AI projects with a predictable cost model. Machine learning environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. Code Editor Code Editor extends Studio so that you"} +{"global_id": 427, "doc_id": "sagemaker", "chunk_id": "7", "question_id": 4, "question": "What is SageMaker Canvas?", "answer_span": "SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them.", "chunk": "EKS cluster Observability, offering real-time visibility into cluster capacity, compute availability and usage, team allocation and utilization, and task run and wait time information. Amazon SageMaker Partner AI Apps With Amazon SageMaker Partner AI Apps, users get access to generative artificial intelligence (AI) and machine learning (ML) development applications built, published, and distributed by industry-leading application providers. Partner AI Apps are certified to run on SageMaker AI. With Partner AI Apps, users can accelerate and improve how they build solutions based on foundation models (FM) and classic ML models without compromising the security of their sensitive data, which stays completely within their trusted security configuration and is never shared with a third party. Q Developer is available in Canvas You can chat with Amazon Q Developer in Amazon SageMaker Canvas using natural language for generative AI assistance with solving your machine learning problems. You can converse with Q Developer to discuss the steps of a machine learning workflow and leverage Canvas functionality such as data transforms, model building, and deployment. SageMaker training plans Amazon SageMaker training plans are a compute reservation capability designed for largescale AI model training workloads running on SageMaker training jobs and HyperPod clusters. They provide predictable access to high-demand GPU-accelerated computing resources within specified timelines. 
You can specify a desired timeline, duration, and maximum compute resources, and SageMaker training plans automatically manages infrastructure setup, workload execution, and fault recovery. This allows for efficiently planning and executing mission-critical AI projects with a predictable cost model. Machine learning environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. Code Editor Code Editor extends Studio so that you"} +{"global_id": 428, "doc_id": "sagemaker", "chunk_id": "8", "question_id": 1, "question": "What is SageMaker Canvas?", "answer_span": "SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them.", "chunk": "environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. Code Editor Code Editor extends Studio so that you can write, test, debug and run your analytics and machine learning code in an environment based on Visual Studio Code - Open Source (\"CodeOSS\"). SageMaker geospatial capabilities Build, train, and deploy ML models using geospatial data. SageMaker HyperPod Amazon SageMaker HyperPod is a capability of SageMaker AI that provides an always-on machine learning environment on resilient clusters that you can run any machine learning workloads for developing large machine learning models such as large language models (LLMs) and diffusion models. JupyterLab in Studio JupyterLab in Studio improves latency and reliability for Studio Notebooks Studio Studio is the latest web-based experience for running ML workflows. Studio offers a suite of IDEs, including Code Editor, a new Jupyterlab application, RStudio, and Studio Classic. Amazon SageMaker Studio Classic An integrated machine learning environment where you can build, train, deploy, and analyze your models all in the same application. SageMaker Studio Lab A free service that gives customers access to AWS compute resources in an environment based on open-source JupyterLab. RStudio on Amazon SageMaker AI An integrated development environment for R, with a console, syntax-highlighting editor that supports direct code execution, and tools for plotting, history, debugging and workspace management. Machine learning environments 8 Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. 
Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with"} +{"global_id": 429, "doc_id": "sagemaker", "chunk_id": "8", "question_id": 2, "question": "What does Amazon SageMaker HyperPod provide?", "answer_span": "Amazon SageMaker HyperPod is a capability of SageMaker AI that provides an always-on machine learning environment on resilient clusters that you can run any machine learning workloads for developing large machine learning models such as large language models (LLMs) and diffusion models.", "chunk": "environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. Code Editor Code Editor extends Studio so that you can write, test, debug and run your analytics and machine learning code in an environment based on Visual Studio Code - Open Source (\"CodeOSS\"). SageMaker geospatial capabilities Build, train, and deploy ML models using geospatial data. SageMaker HyperPod Amazon SageMaker HyperPod is a capability of SageMaker AI that provides an always-on machine learning environment on resilient clusters that you can run any machine learning workloads for developing large machine learning models such as large language models (LLMs) and diffusion models. JupyterLab in Studio JupyterLab in Studio improves latency and reliability for Studio Notebooks Studio Studio is the latest web-based experience for running ML workflows. Studio offers a suite of IDEs, including Code Editor, a new Jupyterlab application, RStudio, and Studio Classic. Amazon SageMaker Studio Classic An integrated machine learning environment where you can build, train, deploy, and analyze your models all in the same application. SageMaker Studio Lab A free service that gives customers access to AWS compute resources in an environment based on open-source JupyterLab. RStudio on Amazon SageMaker AI An integrated development environment for R, with a console, syntax-highlighting editor that supports direct code execution, and tools for plotting, history, debugging and workspace management. Machine learning environments 8 Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with"} +{"global_id": 430, "doc_id": "sagemaker", "chunk_id": "8", "question_id": 3, "question": "What is SageMaker Studio Lab?", "answer_span": "SageMaker Studio Lab A free service that gives customers access to AWS compute resources in an environment based on open-source JupyterLab.", "chunk": "environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. Code Editor Code Editor extends Studio so that you can write, test, debug and run your analytics and machine learning code in an environment based on Visual Studio Code - Open Source (\"CodeOSS\"). SageMaker geospatial capabilities Build, train, and deploy ML models using geospatial data. 
SageMaker HyperPod Amazon SageMaker HyperPod is a capability of SageMaker AI that provides an always-on machine learning environment on resilient clusters that you can run any machine learning workloads for developing large machine learning models such as large language models (LLMs) and diffusion models. JupyterLab in Studio JupyterLab in Studio improves latency and reliability for Studio Notebooks Studio Studio is the latest web-based experience for running ML workflows. Studio offers a suite of IDEs, including Code Editor, a new Jupyterlab application, RStudio, and Studio Classic. Amazon SageMaker Studio Classic An integrated machine learning environment where you can build, train, deploy, and analyze your models all in the same application. SageMaker Studio Lab A free service that gives customers access to AWS compute resources in an environment based on open-source JupyterLab. RStudio on Amazon SageMaker AI An integrated development environment for R, with a console, syntax-highlighting editor that supports direct code execution, and tools for plotting, history, debugging and workspace management. Machine learning environments 8 Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with"} +{"global_id": 431, "doc_id": "sagemaker", "chunk_id": "8", "question_id": 4, "question": "What is RStudio on Amazon SageMaker AI?", "answer_span": "RStudio on Amazon SageMaker AI An integrated development environment for R, with a console, syntax-highlighting editor that supports direct code execution, and tools for plotting, history, debugging and workspace management.", "chunk": "environments SageMaker AI includes the following machine learning environments. Machine learning environments 7 Amazon SageMaker AI Developer Guide SageMaker Canvas An auto ML service that gives people with no coding experience the ability to build models and make predictions with them. Code Editor Code Editor extends Studio so that you can write, test, debug and run your analytics and machine learning code in an environment based on Visual Studio Code - Open Source (\"CodeOSS\"). SageMaker geospatial capabilities Build, train, and deploy ML models using geospatial data. SageMaker HyperPod Amazon SageMaker HyperPod is a capability of SageMaker AI that provides an always-on machine learning environment on resilient clusters that you can run any machine learning workloads for developing large machine learning models such as large language models (LLMs) and diffusion models. JupyterLab in Studio JupyterLab in Studio improves latency and reliability for Studio Notebooks Studio Studio is the latest web-based experience for running ML workflows. Studio offers a suite of IDEs, including Code Editor, a new Jupyterlab application, RStudio, and Studio Classic. Amazon SageMaker Studio Classic An integrated machine learning environment where you can build, train, deploy, and analyze your models all in the same application. SageMaker Studio Lab A free service that gives customers access to AWS compute resources in an environment based on open-source JupyterLab. 
RStudio on Amazon SageMaker AI An integrated development environment for R, with a console, syntax-highlighting editor that supports direct code execution, and tools for plotting, history, debugging and workspace management. Machine learning environments 8 Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with"} +{"global_id": 432, "doc_id": "sagemaker", "chunk_id": "9", "question_id": 1, "question": "What does Amazon Augmented AI do?", "answer_span": "Amazon Augmented AI Build the workflows required for human review of ML predictions.", "chunk": "Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. AutoML step Create an AutoML job to automatically train a model in Pipelines. SageMaker Autopilot Users without machine learning knowledge can quickly build classification and regression models. Batch Transform Preprocess datasets, run inference when you don't need a persistent endpoint, and associate input records with inferences to assist the interpretation of results. SageMaker Clarify Improve your machine learning models by detecting potential bias and help explain the predictions that models make. Collaboration with shared spaces A shared space consists of a shared JupyterServer application and a shared directory. All user profiles in a Amazon SageMaker AI domain have access to all shared spaces in the domain. SageMaker Data Wrangler Import, analyze, prepare, and featurize data in SageMaker Studio. You can integrate Data Wrangler into your machine learning workflows to simplify and streamline data pre-processing and feature engineering using little to no coding. You can also add your own Python scripts and transformations to customize your data prep workflow. Data Wrangler data preparation widget Interact with your data, get visualizations, explore actionable insights, and fix data quality issues. Major features 9 Amazon SageMaker AI Developer Guide SageMaker Debugger Inspect training parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and"} +{"global_id": 433, "doc_id": "sagemaker", "chunk_id": "9", "question_id": 2, "question": "What can users without machine learning knowledge do with SageMaker Autopilot?", "answer_span": "SageMaker Autopilot Users without machine learning knowledge can quickly build classification and regression models.", "chunk": "Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. 
Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. AutoML step Create an AutoML job to automatically train a model in Pipelines. SageMaker Autopilot Users without machine learning knowledge can quickly build classification and regression models. Batch Transform Preprocess datasets, run inference when you don't need a persistent endpoint, and associate input records with inferences to assist the interpretation of results. SageMaker Clarify Improve your machine learning models by detecting potential bias and help explain the predictions that models make. Collaboration with shared spaces A shared space consists of a shared JupyterServer application and a shared directory. All user profiles in a Amazon SageMaker AI domain have access to all shared spaces in the domain. SageMaker Data Wrangler Import, analyze, prepare, and featurize data in SageMaker Studio. You can integrate Data Wrangler into your machine learning workflows to simplify and streamline data pre-processing and feature engineering using little to no coding. You can also add your own Python scripts and transformations to customize your data prep workflow. Data Wrangler data preparation widget Interact with your data, get visualizations, explore actionable insights, and fix data quality issues. Major features 9 Amazon SageMaker AI Developer Guide SageMaker Debugger Inspect training parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and"} +{"global_id": 434, "doc_id": "sagemaker", "chunk_id": "9", "question_id": 3, "question": "What is the purpose of SageMaker Debugger?", "answer_span": "SageMaker Debugger Inspect training parameters and data throughout the training process.", "chunk": "Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. AutoML step Create an AutoML job to automatically train a model in Pipelines. SageMaker Autopilot Users without machine learning knowledge can quickly build classification and regression models. Batch Transform Preprocess datasets, run inference when you don't need a persistent endpoint, and associate input records with inferences to assist the interpretation of results. SageMaker Clarify Improve your machine learning models by detecting potential bias and help explain the predictions that models make. Collaboration with shared spaces A shared space consists of a shared JupyterServer application and a shared directory. All user profiles in a Amazon SageMaker AI domain have access to all shared spaces in the domain. SageMaker Data Wrangler Import, analyze, prepare, and featurize data in SageMaker Studio. You can integrate Data Wrangler into your machine learning workflows to simplify and streamline data pre-processing and feature engineering using little to no coding. 
You can also add your own Python scripts and transformations to customize your data prep workflow. Data Wrangler data preparation widget Interact with your data, get visualizations, explore actionable insights, and fix data quality issues. Major features 9 Amazon SageMaker AI Developer Guide SageMaker Debugger Inspect training parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and"} +{"global_id": 435, "doc_id": "sagemaker", "chunk_id": "9", "question_id": 4, "question": "What can you do with SageMaker Data Wrangler?", "answer_span": "SageMaker Data Wrangler Import, analyze, prepare, and featurize data in SageMaker Studio.", "chunk": "Amazon SageMaker AI Developer Guide Major features SageMaker AI includes the following major features in alphabetical order excluding any SageMaker AI prefix. Amazon Augmented AI Build the workflows required for human review of ML predictions. Amazon A2I brings human review to all developers, removing the undifferentiated heavy lifting associated with building human review systems or managing large numbers of human reviewers. AutoML step Create an AutoML job to automatically train a model in Pipelines. SageMaker Autopilot Users without machine learning knowledge can quickly build classification and regression models. Batch Transform Preprocess datasets, run inference when you don't need a persistent endpoint, and associate input records with inferences to assist the interpretation of results. SageMaker Clarify Improve your machine learning models by detecting potential bias and help explain the predictions that models make. Collaboration with shared spaces A shared space consists of a shared JupyterServer application and a shared directory. All user profiles in a Amazon SageMaker AI domain have access to all shared spaces in the domain. SageMaker Data Wrangler Import, analyze, prepare, and featurize data in SageMaker Studio. You can integrate Data Wrangler into your machine learning workflows to simplify and streamline data pre-processing and feature engineering using little to no coding. You can also add your own Python scripts and transformations to customize your data prep workflow. Data Wrangler data preparation widget Interact with your data, get visualizations, explore actionable insights, and fix data quality issues. Major features 9 Amazon SageMaker AI Developer Guide SageMaker Debugger Inspect training parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and"} +{"global_id": 436, "doc_id": "sagemaker", "chunk_id": "10", "question_id": 1, "question": "What does SageMaker Edge Manager do?", "answer_span": "SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime.", "chunk": "parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. 
SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and tracking. You can use the tracked data to reconstruct an experiment, incrementally build on experiments conducted by peers, and trace model lineage for compliance and audit verifications. SageMaker Feature Store A centralized store for features and associated metadata so features can be easily discovered and reused. You can create two types of stores, an Online or Offline store. The Online Store can be used for low latency, real-time inference use cases and the Offline Store can be used for training and batch inference. SageMaker Ground Truth High-quality training datasets by using workers along with machine learning to create labeled datasets. SageMaker Ground Truth Plus A turnkey data labeling feature to create high-quality training datasets without having to build labeling applications and manage the labeling workforce on your own. SageMaker Inference Recommender Get recommendations on inference instance types and configurations (e.g. instance count, container parameters and model optimizations) to use your ML models and workloads. Inference shadow tests Evaluate any changes to your model-serving infrastructure by comparing its performance against the currently deployed infrastructure. Major features 10 Amazon SageMaker AI Developer Guide SageMaker JumpStart Learn about SageMaker AI features and capabilities through curated 1-click solutions, example notebooks, and pretrained models that you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a"} +{"global_id": 437, "doc_id": "sagemaker", "chunk_id": "10", "question_id": 2, "question": "What is the purpose of SageMaker Experiments?", "answer_span": "SageMaker Experiments Experiment management and tracking.", "chunk": "parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and tracking. You can use the tracked data to reconstruct an experiment, incrementally build on experiments conducted by peers, and trace model lineage for compliance and audit verifications. SageMaker Feature Store A centralized store for features and associated metadata so features can be easily discovered and reused. You can create two types of stores, an Online or Offline store. The Online Store can be used for low latency, real-time inference use cases and the Offline Store can be used for training and batch inference. SageMaker Ground Truth High-quality training datasets by using workers along with machine learning to create labeled datasets. SageMaker Ground Truth Plus A turnkey data labeling feature to create high-quality training datasets without having to build labeling applications and manage the labeling workforce on your own. SageMaker Inference Recommender Get recommendations on inference instance types and configurations (e.g. 
instance count, container parameters and model optimizations) to use your ML models and workloads. Inference shadow tests Evaluate any changes to your model-serving infrastructure by comparing its performance against the currently deployed infrastructure. Major features 10 Amazon SageMaker AI Developer Guide SageMaker JumpStart Learn about SageMaker AI features and capabilities through curated 1-click solutions, example notebooks, and pretrained models that you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a"} +{"global_id": 438, "doc_id": "sagemaker", "chunk_id": "10", "question_id": 3, "question": "What types of stores can you create in SageMaker Feature Store?", "answer_span": "You can create two types of stores, an Online or Offline store.", "chunk": "parameters and data throughout the training process. Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and tracking. You can use the tracked data to reconstruct an experiment, incrementally build on experiments conducted by peers, and trace model lineage for compliance and audit verifications. SageMaker Feature Store A centralized store for features and associated metadata so features can be easily discovered and reused. You can create two types of stores, an Online or Offline store. The Online Store can be used for low latency, real-time inference use cases and the Offline Store can be used for training and batch inference. SageMaker Ground Truth High-quality training datasets by using workers along with machine learning to create labeled datasets. SageMaker Ground Truth Plus A turnkey data labeling feature to create high-quality training datasets without having to build labeling applications and manage the labeling workforce on your own. SageMaker Inference Recommender Get recommendations on inference instance types and configurations (e.g. instance count, container parameters and model optimizations) to use your ML models and workloads. Inference shadow tests Evaluate any changes to your model-serving infrastructure by comparing its performance against the currently deployed infrastructure. Major features 10 Amazon SageMaker AI Developer Guide SageMaker JumpStart Learn about SageMaker AI features and capabilities through curated 1-click solutions, example notebooks, and pretrained models that you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a"} +{"global_id": 439, "doc_id": "sagemaker", "chunk_id": "10", "question_id": 4, "question": "What does SageMaker Ground Truth Plus provide?", "answer_span": "A turnkey data labeling feature to create high-quality training datasets without having to build labeling applications and manage the labeling workforce on your own.", "chunk": "parameters and data throughout the training process. 
Automatically detect and alert users to commonly occurring errors such as parameter values getting too large or small. SageMaker Edge Manager Optimize custom models for edge devices, create and manage fleets and run models with an efficient runtime. SageMaker Experiments Experiment management and tracking. You can use the tracked data to reconstruct an experiment, incrementally build on experiments conducted by peers, and trace model lineage for compliance and audit verifications. SageMaker Feature Store A centralized store for features and associated metadata so features can be easily discovered and reused. You can create two types of stores, an Online or Offline store. The Online Store can be used for low latency, real-time inference use cases and the Offline Store can be used for training and batch inference. SageMaker Ground Truth High-quality training datasets by using workers along with machine learning to create labeled datasets. SageMaker Ground Truth Plus A turnkey data labeling feature to create high-quality training datasets without having to build labeling applications and manage the labeling workforce on your own. SageMaker Inference Recommender Get recommendations on inference instance types and configurations (e.g. instance count, container parameters and model optimizations) to use your ML models and workloads. Inference shadow tests Evaluate any changes to your model-serving infrastructure by comparing its performance against the currently deployed infrastructure. Major features 10 Amazon SageMaker AI Developer Guide SageMaker JumpStart Learn about SageMaker AI features and capabilities through curated 1-click solutions, example notebooks, and pretrained models that you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a"} +{"global_id": 440, "doc_id": "sagemaker", "chunk_id": "11", "question_id": 1, "question": "What can you do with SageMaker ML Lineage Tracking?", "answer_span": "Track the lineage of machine learning workflows.", "chunk": "you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a single place for streamlined governance and reporting throughout the ML lifecycle. SageMaker Model Dashboard A pre-built, visual overview of all the models in your account. Model Dashboard integrates information from SageMaker Model Monitor, transform jobs, endpoints, lineage tracking, and CloudWatch so you can access high-level model information and track model performance in one unified view. SageMaker Model Monitor Monitor and analyze models in production (endpoints) to detect data drift and deviations in model quality. SageMaker Model Registry Versioning, artifact and lineage tracking, approval workflow, and cross account support for deployment of your machine learning models. SageMaker Neo Train machine learning models once, then run anywhere in the cloud and at the edge. Notebook-based Workflows Run your SageMaker Studio notebook as a non-interactive, scheduled job. 
Preprocessing Analyze and preprocess data, tackle feature engineering, and evaluate models. Major features 11 Amazon SageMaker AI Developer Guide SageMaker Projects Create end-to-end ML solutions with CI/CD by using SageMaker Projects. Reinforcement Learning Maximize the long-term reward that an agent receives as a result of its actions. SageMaker Role Manager Administrators can define least-privilege permissions for common ML activities using custom and preconfigured persona-based IAM roles. SageMaker Serverless Endpoints A serverless endpoint option for hosting your ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit"} +{"global_id": 441, "doc_id": "sagemaker", "chunk_id": "11", "question_id": 2, "question": "What is the purpose of SageMaker Model Cards?", "answer_span": "Document information about your ML models in a single place for streamlined governance and reporting throughout the ML lifecycle.", "chunk": "you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a single place for streamlined governance and reporting throughout the ML lifecycle. SageMaker Model Dashboard A pre-built, visual overview of all the models in your account. Model Dashboard integrates information from SageMaker Model Monitor, transform jobs, endpoints, lineage tracking, and CloudWatch so you can access high-level model information and track model performance in one unified view. SageMaker Model Monitor Monitor and analyze models in production (endpoints) to detect data drift and deviations in model quality. SageMaker Model Registry Versioning, artifact and lineage tracking, approval workflow, and cross account support for deployment of your machine learning models. SageMaker Neo Train machine learning models once, then run anywhere in the cloud and at the edge. Notebook-based Workflows Run your SageMaker Studio notebook as a non-interactive, scheduled job. Preprocessing Analyze and preprocess data, tackle feature engineering, and evaluate models. Major features 11 Amazon SageMaker AI Developer Guide SageMaker Projects Create end-to-end ML solutions with CI/CD by using SageMaker Projects. Reinforcement Learning Maximize the long-term reward that an agent receives as a result of its actions. SageMaker Role Manager Administrators can define least-privilege permissions for common ML activities using custom and preconfigured persona-based IAM roles. SageMaker Serverless Endpoints A serverless endpoint option for hosting your ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. 
Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit"} +{"global_id": 442, "doc_id": "sagemaker", "chunk_id": "11", "question_id": 3, "question": "What does SageMaker Model Monitor do?", "answer_span": "Monitor and analyze models in production (endpoints) to detect data drift and deviations in model quality.", "chunk": "you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a single place for streamlined governance and reporting throughout the ML lifecycle. SageMaker Model Dashboard A pre-built, visual overview of all the models in your account. Model Dashboard integrates information from SageMaker Model Monitor, transform jobs, endpoints, lineage tracking, and CloudWatch so you can access high-level model information and track model performance in one unified view. SageMaker Model Monitor Monitor and analyze models in production (endpoints) to detect data drift and deviations in model quality. SageMaker Model Registry Versioning, artifact and lineage tracking, approval workflow, and cross account support for deployment of your machine learning models. SageMaker Neo Train machine learning models once, then run anywhere in the cloud and at the edge. Notebook-based Workflows Run your SageMaker Studio notebook as a non-interactive, scheduled job. Preprocessing Analyze and preprocess data, tackle feature engineering, and evaluate models. Major features 11 Amazon SageMaker AI Developer Guide SageMaker Projects Create end-to-end ML solutions with CI/CD by using SageMaker Projects. Reinforcement Learning Maximize the long-term reward that an agent receives as a result of its actions. SageMaker Role Manager Administrators can define least-privilege permissions for common ML activities using custom and preconfigured persona-based IAM roles. SageMaker Serverless Endpoints A serverless endpoint option for hosting your ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit"} +{"global_id": 443, "doc_id": "sagemaker", "chunk_id": "11", "question_id": 4, "question": "What is a feature of SageMaker Serverless Endpoints?", "answer_span": "Automatically scales in capacity to serve your endpoint traffic.", "chunk": "you can deploy. You can also fine-tune the models and deploy them. SageMaker ML Lineage Tracking Track the lineage of machine learning workflows. SageMaker Model Building Pipelines Create and manage machine learning pipelines integrated directly with SageMaker AI jobs. SageMaker Model Cards Document information about your ML models in a single place for streamlined governance and reporting throughout the ML lifecycle. SageMaker Model Dashboard A pre-built, visual overview of all the models in your account. Model Dashboard integrates information from SageMaker Model Monitor, transform jobs, endpoints, lineage tracking, and CloudWatch so you can access high-level model information and track model performance in one unified view. 
SageMaker Model Monitor Monitor and analyze models in production (endpoints) to detect data drift and deviations in model quality. SageMaker Model Registry Versioning, artifact and lineage tracking, approval workflow, and cross account support for deployment of your machine learning models. SageMaker Neo Train machine learning models once, then run anywhere in the cloud and at the edge. Notebook-based Workflows Run your SageMaker Studio notebook as a non-interactive, scheduled job. Preprocessing Analyze and preprocess data, tackle feature engineering, and evaluate models. Major features 11 Amazon SageMaker AI Developer Guide SageMaker Projects Create end-to-end ML solutions with CI/CD by using SageMaker Projects. Reinforcement Learning Maximize the long-term reward that an agent receives as a result of its actions. SageMaker Role Manager Administrators can define least-privilege permissions for common ML activities using custom and preconfigured persona-based IAM roles. SageMaker Serverless Endpoints A serverless endpoint option for hosting your ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit"} +{"global_id": 444, "doc_id": "sagemaker", "chunk_id": "12", "question_id": 1, "question": "What does the ML model automatically do?", "answer_span": "Automatically scales in capacity to serve your endpoint traffic.", "chunk": "ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit history. SageMaker Studio Notebooks The next generation of SageMaker notebooks that include AWS IAM Identity Center (IAM Identity Center) integration, fast start-up times, and single-click sharing. SageMaker Studio Notebooks and Amazon EMR Easily discover, connect to, create, terminate and manage Amazon EMR clusters in single account and cross account configurations directly from SageMaker Studio. SageMaker Training Compiler Train deep learning models faster on scalable GPU instances managed by SageMaker AI. Major features 12 Amazon SageMaker AI Developer Guide Guide to getting set up with Amazon SageMaker AI To use the features in Amazon SageMaker AI, you must have access to Amazon SageMaker AI. To set up Amazon SageMaker AI and its features, use one of the following options. • Use quick setup: Fastest setup for individual users with default settings. • Use custom setup: Advanced setup for enterprise Machine Learning (ML) administrators. Ideal option for ML administrators setting up SageMaker AI for many users or an organization. Note You do not need to set up SageMaker AI if: • An email is sent to you inviting you to create a password to use the IAM Identity Center authentication. The email also contains the AWS access portal URL you use to sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. 
For information about Studio Lab, see"} +{"global_id": 445, "doc_id": "sagemaker", "chunk_id": "12", "question_id": 2, "question": "What is the purpose of the Studio Classic Git extension?", "answer_span": "A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit history.", "chunk": "ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit history. SageMaker Studio Notebooks The next generation of SageMaker notebooks that include AWS IAM Identity Center (IAM Identity Center) integration, fast start-up times, and single-click sharing. SageMaker Studio Notebooks and Amazon EMR Easily discover, connect to, create, terminate and manage Amazon EMR clusters in single account and cross account configurations directly from SageMaker Studio. SageMaker Training Compiler Train deep learning models faster on scalable GPU instances managed by SageMaker AI. Major features 12 Amazon SageMaker AI Developer Guide Guide to getting set up with Amazon SageMaker AI To use the features in Amazon SageMaker AI, you must have access to Amazon SageMaker AI. To set up Amazon SageMaker AI and its features, use one of the following options. • Use quick setup: Fastest setup for individual users with default settings. • Use custom setup: Advanced setup for enterprise Machine Learning (ML) administrators. Ideal option for ML administrators setting up SageMaker AI for many users or an organization. Note You do not need to set up SageMaker AI if: • An email is sent to you inviting you to create a password to use the IAM Identity Center authentication. The email also contains the AWS access portal URL you use to sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. For information about Studio Lab, see"} +{"global_id": 446, "doc_id": "sagemaker", "chunk_id": "12", "question_id": 3, "question": "What integration do SageMaker Studio Notebooks include?", "answer_span": "AWS IAM Identity Center (IAM Identity Center) integration.", "chunk": "ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit history. SageMaker Studio Notebooks The next generation of SageMaker notebooks that include AWS IAM Identity Center (IAM Identity Center) integration, fast start-up times, and single-click sharing. SageMaker Studio Notebooks and Amazon EMR Easily discover, connect to, create, terminate and manage Amazon EMR clusters in single account and cross account configurations directly from SageMaker Studio. SageMaker Training Compiler Train deep learning models faster on scalable GPU instances managed by SageMaker AI. Major features 12 Amazon SageMaker AI Developer Guide Guide to getting set up with Amazon SageMaker AI To use the features in Amazon SageMaker AI, you must have access to Amazon SageMaker AI. To set up Amazon SageMaker AI and its features, use one of the following options. 
• Use quick setup: Fastest setup for individual users with default settings. • Use custom setup: Advanced setup for enterprise Machine Learning (ML) administrators. Ideal option for ML administrators setting up SageMaker AI for many users or an organization. Note You do not need to set up SageMaker AI if: • An email is sent to you inviting you to create a password to use the IAM Identity Center authentication. The email also contains the AWS access portal URL you use to sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. For information about Studio Lab, see"} +{"global_id": 447, "doc_id": "sagemaker", "chunk_id": "12", "question_id": 4, "question": "What is the fastest setup option for individual users in Amazon SageMaker AI?", "answer_span": "Use quick setup: Fastest setup for individual users with default settings.", "chunk": "ML model. Automatically scales in capacity to serve your endpoint traffic. Removes the need to select instance types or manage scaling policies on an endpoint. Studio Classic Git extension A Git extension to enter the URL of a Git repository, clone it into your environment, push changes, and view commit history. SageMaker Studio Notebooks The next generation of SageMaker notebooks that include AWS IAM Identity Center (IAM Identity Center) integration, fast start-up times, and single-click sharing. SageMaker Studio Notebooks and Amazon EMR Easily discover, connect to, create, terminate and manage Amazon EMR clusters in single account and cross account configurations directly from SageMaker Studio. SageMaker Training Compiler Train deep learning models faster on scalable GPU instances managed by SageMaker AI. Major features 12 Amazon SageMaker AI Developer Guide Guide to getting set up with Amazon SageMaker AI To use the features in Amazon SageMaker AI, you must have access to Amazon SageMaker AI. To set up Amazon SageMaker AI and its features, use one of the following options. • Use quick setup: Fastest setup for individual users with default settings. • Use custom setup: Advanced setup for enterprise Machine Learning (ML) administrators. Ideal option for ML administrators setting up SageMaker AI for many users or an organization. Note You do not need to set up SageMaker AI if: • An email is sent to you inviting you to create a password to use the IAM Identity Center authentication. The email also contains the AWS access portal URL you use to sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. For information about Studio Lab, see"} +{"global_id": 448, "doc_id": "sagemaker", "chunk_id": "13", "question_id": 1, "question": "What do you need to create to get access to all of the AWS services and resources?", "answer_span": "You will need to create an Amazon Web Services (AWS) account to get access to all of the AWS services and resources for the account.", "chunk": "sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. For information about Studio Lab, see Amazon SageMaker Studio Lab. 
• If you are using the AWS CLI, SageMaker APIs, or SageMaker SDKs You do not need to set up SageMaker AI if any of the prior situations apply. You can skip the rest of this Guide to getting set up with Amazon SageMaker AI chapter and navigate to the following: • Automated ML, no-code, or low-code • Machine learning environments offered by Amazon SageMaker AI • APIs, CLI, and SDKs Topics • Complete Amazon SageMaker AI prerequisites • Use quick setup for Amazon SageMaker AI • Use custom setup for Amazon SageMaker AI 13 Amazon SageMaker AI Developer Guide • Amazon SageMaker AI domain overview • Supported Regions and Quotas Complete Amazon SageMaker AI prerequisites Before you can set up Amazon SageMaker AI, you must complete the following prerequisites. • Required: You will need to create an Amazon Web Services (AWS) account to get access to all of the AWS services and resources for the account. • Highly recommended: We highly recommend that you create an administrative user to manage AWS resources for the account, to adhere to the Security best practices in IAM. It is assumed that you have an administrative user for many of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. Topics • Sign up for an AWS account • Create a user with"} +{"global_id": 449, "doc_id": "sagemaker", "chunk_id": "13", "question_id": 2, "question": "Is it required to have an AWS account to use the Amazon SageMaker Studio Lab ML environment?", "answer_span": "Studio Lab does not require you to have an AWS account.", "chunk": "sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. For information about Studio Lab, see Amazon SageMaker Studio Lab. • If you are using the AWS CLI, SageMaker APIs, or SageMaker SDKs You do not need to set up SageMaker AI if any of the prior situations apply. You can skip the rest of this Guide to getting set up with Amazon SageMaker AI chapter and navigate to the following: • Automated ML, no-code, or low-code • Machine learning environments offered by Amazon SageMaker AI • APIs, CLI, and SDKs Topics • Complete Amazon SageMaker AI prerequisites • Use quick setup for Amazon SageMaker AI • Use custom setup for Amazon SageMaker AI 13 Amazon SageMaker AI Developer Guide • Amazon SageMaker AI domain overview • Supported Regions and Quotas Complete Amazon SageMaker AI prerequisites Before you can set up Amazon SageMaker AI, you must complete the following prerequisites. • Required: You will need to create an Amazon Web Services (AWS) account to get access to all of the AWS services and resources for the account. • Highly recommended: We highly recommend that you create an administrative user to manage AWS resources for the account, to adhere to the Security best practices in IAM. It is assumed that you have an administrative user for many of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. 
Topics • Sign up for an AWS account • Create a user with"} +{"global_id": 450, "doc_id": "sagemaker", "chunk_id": "13", "question_id": 3, "question": "What is highly recommended for managing AWS resources for the account?", "answer_span": "We highly recommend that you create an administrative user to manage AWS resources for the account, to adhere to the Security best practices in IAM.", "chunk": "sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. For information about Studio Lab, see Amazon SageMaker Studio Lab. • If you are using the AWS CLI, SageMaker APIs, or SageMaker SDKs You do not need to set up SageMaker AI if any of the prior situations apply. You can skip the rest of this Guide to getting set up with Amazon SageMaker AI chapter and navigate to the following: • Automated ML, no-code, or low-code • Machine learning environments offered by Amazon SageMaker AI • APIs, CLI, and SDKs Topics • Complete Amazon SageMaker AI prerequisites • Use quick setup for Amazon SageMaker AI • Use custom setup for Amazon SageMaker AI 13 Amazon SageMaker AI Developer Guide • Amazon SageMaker AI domain overview • Supported Regions and Quotas Complete Amazon SageMaker AI prerequisites Before you can set up Amazon SageMaker AI, you must complete the following prerequisites. • Required: You will need to create an Amazon Web Services (AWS) account to get access to all of the AWS services and resources for the account. • Highly recommended: We highly recommend that you create an administrative user to manage AWS resources for the account, to adhere to the Security best practices in IAM. It is assumed that you have an administrative user for many of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. Topics • Sign up for an AWS account • Create a user with"} +{"global_id": 451, "doc_id": "sagemaker", "chunk_id": "13", "question_id": 4, "question": "What should you configure if you intend to manage your AWS services and resources using the AWS CLI?", "answer_span": "Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI.", "chunk": "sign in. For more information about signing in to the AWS access portal, see Sign in to the AWS access portal. • You intend to use the Amazon SageMaker Studio Lab ML environment. Studio Lab does not require you to have an AWS account. For information about Studio Lab, see Amazon SageMaker Studio Lab. • If you are using the AWS CLI, SageMaker APIs, or SageMaker SDKs You do not need to set up SageMaker AI if any of the prior situations apply. You can skip the rest of this Guide to getting set up with Amazon SageMaker AI chapter and navigate to the following: • Automated ML, no-code, or low-code • Machine learning environments offered by Amazon SageMaker AI • APIs, CLI, and SDKs Topics • Complete Amazon SageMaker AI prerequisites • Use quick setup for Amazon SageMaker AI • Use custom setup for Amazon SageMaker AI 13 Amazon SageMaker AI Developer Guide • Amazon SageMaker AI domain overview • Supported Regions and Quotas Complete Amazon SageMaker AI prerequisites Before you can set up Amazon SageMaker AI, you must complete the following prerequisites. 
• Required: You will need to create an Amazon Web Services (AWS) account to get access to all of the AWS services and resources for the account. • Highly recommended: We highly recommend that you create an administrative user to manage AWS resources for the account, to adhere to the Security best practices in IAM. It is assumed that you have an administrative user for many of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. Topics • Sign up for an AWS account • Create a user with"} +{"global_id": 452, "doc_id": "sagemaker", "chunk_id": "14", "question_id": 1, "question": "What is the first step to sign up for an AWS account?", "answer_span": "Open https://portal.aws.amazon.com/billing/signup.", "chunk": "of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. Topics • Sign up for an AWS account • Create a user with administrative access • (Optional) Configure the AWS CLI Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign Complete Amazon SageMaker AI prerequisites 14"} +{"global_id": 453, "doc_id": "sagemaker", "chunk_id": "14", "question_id": 2, "question": "What does the root user have access to in an AWS account?", "answer_span": "The root user has access to all AWS services and resources in the account.", "chunk": "of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. Topics • Sign up for an AWS account • Create a user with administrative access • (Optional) Configure the AWS CLI Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign Complete Amazon SageMaker AI prerequisites 14"} +{"global_id": 454, "doc_id": "sagemaker", "chunk_id": "14", "question_id": 3, "question": "What is optional when managing AWS services and resources?", "answer_span": "Optional: Configure the AWS Command Line Interface (AWS CLI)", "chunk": "of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. 
Topics • Sign up for an AWS account • Create a user with administrative access • (Optional) Configure the AWS CLI Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign Complete Amazon SageMaker AI prerequisites 14"} +{"global_id": 455, "doc_id": "sagemaker", "chunk_id": "14", "question_id": 4, "question": "What is part of the sign-up procedure for an AWS account?", "answer_span": "Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad.", "chunk": "of the administrative tasks throughout the SageMaker AI developer guide. • Optional: Configure the AWS Command Line Interface (AWS CLI) if you intend to manage your AWS services and resources for the account using the AWS CLI. Topics • Sign up for an AWS account • Create a user with administrative access • (Optional) Configure the AWS CLI Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign Complete Amazon SageMaker AI prerequisites 14"} +{"global_id": 456, "doc_id": "ecs", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Elastic Container Service?", "answer_span": "Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. 
The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 457, "doc_id": "ecs", "chunk_id": "0", "question_id": 2, "question": "What are the three layers in Amazon ECS?", "answer_span": "There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 458, "doc_id": "ecs", "chunk_id": "0", "question_id": 3, "question": "What is Fargate in the context of Amazon ECS?", "answer_span": "Fargate is a serverless, pay-as-you-go compute engine.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? 
Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 459, "doc_id": "ecs", "chunk_id": "0", "question_id": 4, "question": "What does Amazon ECS Anywhere provide support for?", "answer_span": "Amazon ECS Anywhere provides support for registering an external instance such as an on-premises server or virtual machine (VM), to your Amazon ECS cluster.", "chunk": "Amazon Elastic Container Service Developer Guide What is Amazon Elastic Container Service? Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications. As a fully managed service, Amazon ECS comes with AWS configuration and operational best practices built-in. It's integrated with both AWS tools, such as Amazon Elastic Container Registry, and third-party tools, such as Docker. This integration makes it easier for teams to focus on building the applications, not the environment. You can run and scale your container workloads across AWS Regions in the cloud, and on-premises, without the complexity of managing a control plane. Terminology and components There are three layers in Amazon ECS: • Capacity - The infrastructure where your containers run • Controller - Deploy and manage your applications that run on the containers • Provisioning - The tools that you can use to interface with the scheduler to deploy and manage your applications and containers The following diagram shows the Amazon ECS layers. Terminology and components 1 Amazon Elastic Container Service Developer Guide The capacity is the infrastructure where your containers run. 
The following is an overview of the capacity options: • Amazon EC2 instances in the AWS cloud You choose the instance type, the number of instances, and manage the capacity. • Serverless (AWS Fargate) in the AWS cloud Fargate is a serverless, pay-as-you-go compute engine. With Fargate you don't need to manage servers, handle capacity planning, or isolate container workloads for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic"} +{"global_id": 460, "doc_id": "ecs", "chunk_id": "1", "question_id": 1, "question": "What does Amazon ECS Anywhere provide support for?", "answer_span": "Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} +{"global_id": 461, "doc_id": "ecs", "chunk_id": "1", "question_id": 2, "question": "What is the blueprint for the application in Amazon ECS?", "answer_span": "Task definition The blueprint for the application.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. 
Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} +{"global_id": 462, "doc_id": "ecs", "chunk_id": "1", "question_id": 3, "question": "What does Amazon ECS manage regarding Amazon EC2 instances?", "answer_span": "Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. 
Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} +{"global_id": 463, "doc_id": "ecs", "chunk_id": "1", "question_id": 4, "question": "What does AWS CDK provide?", "answer_span": "AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages.", "chunk": "for security. • On-premises virtual machines (VM) or servers Amazon ECS Anywhere provides support for registering an external instance such as an onpremises server or virtual machine (VM), to your Amazon ECS cluster. The Amazon ECS scheduler is the software that manages your applications. Terminology and components 2 Amazon Elastic Container Service Developer Guide Features Amazon ECS provides the following high-level features: Task definition The blueprint for the application. Cluster The infrastructure your application runs on. Task An application such as a batch job that performs work, and then stops. Service A long running stateless application. Account Setting Allows access to features. Cluster Auto Scaling Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster. Service Auto Scaling Amazon ECS increases or decreases the desired number of tasks in your service automatically. Provisioning There are multiple options for provisioning Amazon ECS: • AWS Management Console — Provides a web interface that you can use to access your Amazon ECS resources. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon ECS. It's supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details. These include calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. Features 3 Amazon Elastic Container Service Developer Guide • AWS CDK — Provides an open-source software development framework that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing"} +{"global_id": 464, "doc_id": "ecs", "chunk_id": "2", "question_id": 1, "question": "What does the AWS CDK do?", "answer_span": "The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. 
Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} +{"global_id": 465, "doc_id": "ecs", "chunk_id": "2", "question_id": 2, "question": "What does Amazon ECS pricing depend on?", "answer_span": "Amazon ECS pricing depends on the capacity option you choose for your containers.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. 
Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} +{"global_id": 466, "doc_id": "ecs", "chunk_id": "2", "question_id": 3, "question": "What service helps ensure you have the correct number of Amazon EC2 instances?", "answer_span": "Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} +{"global_id": 467, "doc_id": "ecs", "chunk_id": "2", "question_id": 4, "question": "What does Amazon GuardDuty do?", "answer_span": "Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads.", "chunk": "that you can use to model and provision your cloud application resources using familiar programming languages. The AWS CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. Pricing Amazon ECS pricing depends on the capacity option you choose for your containers. • Amazon ECS pricing – Pricing information for Amazon ECS. • AWS Fargate pricing – Pricing information for Fargate. Related services Services to use with Amazon ECS You can use other AWS services to help you deploy yours tasks and services on Amazon ECS. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. Amazon CloudWatch Monitor your services and tasks. Amazon Elastic Container Registry Store and manage container images. 
Elastic Load Balancing Automatically distribute incoming service traffic. Amazon GuardDuty Detect potentially unauthorized or malicious use of your container instances and workloads. Pricing 4 Amazon Elastic Container Service Developer Guide Learn how to create and use Amazon ECS resources The following guides provide an introduction to the tools available to access Amazon ECS and introductory procedures to run containers. Docker basics takes you through the basic steps to create a Docker container image and upload it to an Amazon ECR private repository. The getting started guides walk you through using the AWS Copilot command line interface and the AWS Management Console to complete the common tasks to run your containers on Amazon ECS and AWS Fargate. Contents • Set up to use Amazon ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows"} +{"global_id": 468, "doc_id": "ecs", "chunk_id": "3", "question_id": 1, "question": "What is the AWS Management Console used for?", "answer_span": "The AWS Management Console is a browser-based interface for managing Amazon ECS resources.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide Amazon ECS best practices You can use any of the following pages to learn the most important operational best practices for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. 
Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services"} +{"global_id": 469, "doc_id": "ecs", "chunk_id": "3", "question_id": 2, "question": "What type of tasks can you create for the Fargate launch type?", "answer_span": "Learn how to create an Amazon ECS Linux task for the Fargate launch type", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide Amazon ECS best practices You can use any of the following pages to learn the most important operational best practices for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services"} +{"global_id": 470, "doc_id": "ecs", "chunk_id": "3", "question_id": 3, "question": "What can you learn from the Amazon ECS best practices pages?", "answer_span": "You can use any of the following pages to learn the most important operational best practices for Amazon ECS networking.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. 
Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide Amazon ECS best practices You can use any of the following pages to learn the most important operational best practices for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services"} +{"global_id": 471, "doc_id": "ecs", "chunk_id": "3", "question_id": 4, "question": "What is the purpose of the guide mentioned in the text?", "answer_span": "The following guide prepares you for launching your first Amazon ECS cluster.", "chunk": "ECS • Creating a container image for use on Amazon ECS • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the EC2 launch type • Creating Amazon ECS resources using the AWS CDK • Creating Amazon ECS resources using the AWS Copilot command line interface Set up to use Amazon ECS If you've already signed up for Amazon Web Services (AWS) and have been using Amazon Elastic Compute Cloud (Amazon EC2), you are close to being able to use Amazon ECS. The set-up process for the two services is similar. The following guide prepares you for launching your first Amazon ECS cluster. Complete the following tasks to get set up for Amazon ECS. AWS Management Console The AWS Management Console is a browser-based interface for managing Amazon ECS resources. The console provides a visual overview of the service, making it easy to explore Amazon ECS features and functions without needing to use additional tools. Many related tutorials and walkthroughs are available that can guide you through use of the console. For a tutorial that guides you through the console, see Learn how to create and use Amazon ECS resources. Set up 5 Amazon Elastic Container Service Developer Guide Amazon ECS best practices You can use any of the following pages to learn the most important operational best practices for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services"} +{"global_id": 472, "doc_id": "ecs", "chunk_id": "4", "question_id": 1, "question": "What is the best practice overview for Amazon ECS networking?", "answer_span": "Best practice overview Learn more", "chunk": "for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. 
Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services within a VPC Best practices for connecting Amazon ECS to AWS services from inside your VPC Network services across AWS accounts and VPCs Best practices for networkin g Amazon ECS services across AWS accounts and VPCs Troubleshoot network issues AWS services for Amazon ECS networking troubleshooting You can use any of the following pages to learn the most important operational best practices for Fargate on Amazon ECS. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS Fargate security considera tions Fargate security considera tions for Amazon ECS 163 Amazon Elastic Container Service Developer Guide Best practice overview Learn more Linux containers on Fargate container image pull behavior Linux containers on Fargate container image pull behavior for Amazon ECS Windows containers on Fargate container image pull behavior Windows containers on Fargate container image pull behavior for Amazon ECS Fargate task retirement Task retirement and maintenance for AWS Fargate on Amazon ECS You can use any of the following pages to learn the most important operational best practices for task definitions. Best practice overview Learn more Container images Best practices for Amazon ECS container images Task size Best practices for Amazon ECS task sizes Volume best practices Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview"} +{"global_id": 473, "doc_id": "ecs", "chunk_id": "4", "question_id": 2, "question": "What are best practices for receiving inbound connections to Amazon ECS from the internet?", "answer_span": "Best practices for receiving inbound connections to Amazon ECS from the internet", "chunk": "for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services within a VPC Best practices for connecting Amazon ECS to AWS services from inside your VPC Network services across AWS accounts and VPCs Best practices for networkin g Amazon ECS services across AWS accounts and VPCs Troubleshoot network issues AWS services for Amazon ECS networking troubleshooting You can use any of the following pages to learn the most important operational best practices for Fargate on Amazon ECS. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS Fargate security considera tions Fargate security considera tions for Amazon ECS 163 Amazon Elastic Container Service Developer Guide Best practice overview Learn more Linux containers on Fargate container image pull behavior Linux containers on Fargate container image pull behavior for Amazon ECS Windows containers on Fargate container image pull behavior Windows containers on Fargate container image pull behavior for Amazon ECS Fargate task retirement Task retirement and maintenance for AWS Fargate on Amazon ECS You can use any of the following pages to learn the most important operational best practices for task definitions. 
Best practice overview Learn more Container images Best practices for Amazon ECS container images Task size Best practices for Amazon ECS task sizes Volume best practices Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview"} +{"global_id": 474, "doc_id": "ecs", "chunk_id": "4", "question_id": 3, "question": "What can you use to learn the most important operational best practices for Fargate on Amazon ECS?", "answer_span": "You can use any of the following pages to learn the most important operational best practices for Fargate on Amazon ECS.", "chunk": "for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services within a VPC Best practices for connecting Amazon ECS to AWS services from inside your VPC Network services across AWS accounts and VPCs Best practices for networkin g Amazon ECS services across AWS accounts and VPCs Troubleshoot network issues AWS services for Amazon ECS networking troubleshooting You can use any of the following pages to learn the most important operational best practices for Fargate on Amazon ECS. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS Fargate security considera tions Fargate security considera tions for Amazon ECS 163 Amazon Elastic Container Service Developer Guide Best practice overview Learn more Linux containers on Fargate container image pull behavior Linux containers on Fargate container image pull behavior for Amazon ECS Windows containers on Fargate container image pull behavior Windows containers on Fargate container image pull behavior for Amazon ECS Fargate task retirement Task retirement and maintenance for AWS Fargate on Amazon ECS You can use any of the following pages to learn the most important operational best practices for task definitions. Best practice overview Learn more Container images Best practices for Amazon ECS container images Task size Best practices for Amazon ECS task sizes Volume best practices Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview"} +{"global_id": 475, "doc_id": "ecs", "chunk_id": "4", "question_id": 4, "question": "What are best practices for Amazon ECS container images?", "answer_span": "Best practices for Amazon ECS container images", "chunk": "for Amazon ECS networking. Best practice overview Learn more Connect applications to the internet Connect Amazon ECS applications to the internet Receive inbound connectio ns to Amazon ECS from the internet. 
Best practices for receiving inbound connections to Amazon ECS from the internet Connect Amazon ECS to other AWS services within a VPC Best practices for connecting Amazon ECS to AWS services from inside your VPC Network services across AWS accounts and VPCs Best practices for networkin g Amazon ECS services across AWS accounts and VPCs Troubleshoot network issues AWS services for Amazon ECS networking troubleshooting You can use any of the following pages to learn the most important operational best practices for Fargate on Amazon ECS. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS Fargate security considera tions Fargate security considera tions for Amazon ECS 163 Amazon Elastic Container Service Developer Guide Best practice overview Learn more Linux containers on Fargate container image pull behavior Linux containers on Fargate container image pull behavior for Amazon ECS Windows containers on Fargate container image pull behavior Windows containers on Fargate container image pull behavior for Amazon ECS Fargate task retirement Task retirement and maintenance for AWS Fargate on Amazon ECS You can use any of the following pages to learn the most important operational best practices for task definitions. Best practice overview Learn more Container images Best practices for Amazon ECS container images Task size Best practices for Amazon ECS task sizes Volume best practices Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview"} +{"global_id": 476, "doc_id": "ecs", "chunk_id": "5", "question_id": 1, "question": "What technology can you use with Amazon ECS to run containers without managing servers?", "answer_span": "AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances.", "chunk": "Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview Learn more EC2 container instance security considerations Amazon EC2 container instance security considera tions for Amazon ECS Cluster auto scaling Optimize Amazon ECS cluster auto scaling Operating at scale Operating Amazon ECS at scale Auto scaling and capacity management Amazon ECS Auto scaling and capacity management best practices You can use any of the following pages to learn the most important operational best practices for tasks and services. Best practice overview Learn more Optimize task launch time Optimize Amazon ECS task launch time Service parameters Best practices for Amazon ECS service parameters Optimize load balancer health check parameters Optimize load balancer health check parameters for Amazon ECS Optimize load balancer connection draining parameters Optimize load balancer connection draining parameters for Amazon ECS Optimize service auto scaling Optimizing Amazon ECS service auto scaling 165 Amazon Elastic Container Service Developer Guide You can use any of the following pages to learn the most important operational best practices for security. 
Best practice overview Learn more Network security Network security best practices for Amazon ECS Task and container security Amazon ECS task and container security best practices 166 Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when"} +{"global_id": 477, "doc_id": "ecs", "chunk_id": "5", "question_id": 2, "question": "What does AWS Fargate remove the need for?", "answer_span": "This removes the need to choose server types, decide when", "chunk": "Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview Learn more EC2 container instance security considerations Amazon EC2 container instance security considera tions for Amazon ECS Cluster auto scaling Optimize Amazon ECS cluster auto scaling Operating at scale Operating Amazon ECS at scale Auto scaling and capacity management Amazon ECS Auto scaling and capacity management best practices You can use any of the following pages to learn the most important operational best practices for tasks and services. Best practice overview Learn more Optimize task launch time Optimize Amazon ECS task launch time Service parameters Best practices for Amazon ECS service parameters Optimize load balancer health check parameters Optimize load balancer health check parameters for Amazon ECS Optimize load balancer connection draining parameters Optimize load balancer connection draining parameters for Amazon ECS Optimize service auto scaling Optimizing Amazon ECS service auto scaling 165 Amazon Elastic Container Service Developer Guide You can use any of the following pages to learn the most important operational best practices for security. Best practice overview Learn more Network security Network security best practices for Amazon ECS Task and container security Amazon ECS task and container security best practices 166 Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when"} +{"global_id": 478, "doc_id": "ecs", "chunk_id": "5", "question_id": 3, "question": "What are the best practices for Amazon ECS service parameters?", "answer_span": "Best practices for Amazon ECS service parameters", "chunk": "Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. 
Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview Learn more EC2 container instance security considerations Amazon EC2 container instance security considera tions for Amazon ECS Cluster auto scaling Optimize Amazon ECS cluster auto scaling Operating at scale Operating Amazon ECS at scale Auto scaling and capacity management Amazon ECS Auto scaling and capacity management best practices You can use any of the following pages to learn the most important operational best practices for tasks and services. Best practice overview Learn more Optimize task launch time Optimize Amazon ECS task launch time Service parameters Best practices for Amazon ECS service parameters Optimize load balancer health check parameters Optimize load balancer health check parameters for Amazon ECS Optimize load balancer connection draining parameters Optimize load balancer connection draining parameters for Amazon ECS Optimize service auto scaling Optimizing Amazon ECS service auto scaling 165 Amazon Elastic Container Service Developer Guide You can use any of the following pages to learn the most important operational best practices for security. Best practice overview Learn more Network security Network security best practices for Amazon ECS Task and container security Amazon ECS task and container security best practices 166 Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when"} +{"global_id": 479, "doc_id": "ecs", "chunk_id": "5", "question_id": 4, "question": "What can you learn about Amazon ECS cluster auto scaling?", "answer_span": "Optimize Amazon ECS cluster auto scaling", "chunk": "Storage options for Amazon ECS tasks You can use any of the following pages to learn the most important operational best practices for clusters and capacity. Best practice overview Learn more Fargate security Fargate security best practices in Amazon ECS 164 Amazon Elastic Container Service Developer Guide Best practice overview Learn more EC2 container instance security considerations Amazon EC2 container instance security considera tions for Amazon ECS Cluster auto scaling Optimize Amazon ECS cluster auto scaling Operating at scale Operating Amazon ECS at scale Auto scaling and capacity management Amazon ECS Auto scaling and capacity management best practices You can use any of the following pages to learn the most important operational best practices for tasks and services. Best practice overview Learn more Optimize task launch time Optimize Amazon ECS task launch time Service parameters Best practices for Amazon ECS service parameters Optimize load balancer health check parameters Optimize load balancer health check parameters for Amazon ECS Optimize load balancer connection draining parameters Optimize load balancer connection draining parameters for Amazon ECS Optimize service auto scaling Optimizing Amazon ECS service auto scaling 165 Amazon Elastic Container Service Developer Guide You can use any of the following pages to learn the most important operational best practices for security. 
Best practice overview Learn more Network security Network security best practices for Amazon ECS Task and container security Amazon ECS task and container security best practices 166 Amazon Elastic Container Service Developer Guide AWS Fargate for Amazon ECS AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when"} +{"global_id": 480, "doc_id": "ecs", "chunk_id": "6", "question_id": 1, "question": "What does AWS Fargate allow you to do?", "answer_span": "that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances.", "chunk": "that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task"} +{"global_id": 481, "doc_id": "ecs", "chunk_id": "6", "question_id": 2, "question": "What do you no longer have to do with AWS Fargate?", "answer_span": "With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers.", "chunk": "that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. 
When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task"} +{"global_id": 482, "doc_id": "ecs", "chunk_id": "6", "question_id": 3, "question": "What must you set the requiresCompatibilities task definition parameter to for Fargate?", "answer_span": "You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE.", "chunk": "that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. 
Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task"} +{"global_id": 483, "doc_id": "ecs", "chunk_id": "6", "question_id": 4, "question": "What platform versions does Fargate offer?", "answer_span": "Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.", "chunk": "that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With AWS Fargate, you no longer have to provision, configure, or scale clusters of virtual machines to run containers. This removes the need to choose server types, decide when to scale your clusters, or optimize cluster packing. When you run your tasks and services with the Fargate launch type, you package your application in containers, specify the CPU and memory requirements, define networking and IAM policies, and launch the application. Each Fargate task has its own isolation boundary and does not share the underlying kernel, CPU resources, memory resources, or elastic network interface with another task. You configure your task definitions for Fargate by setting the requiresCompatibilities task definition parameter to FARGATE. For more information, see Launch types. Fargate offers platform versions for Amazon Linux 2 (platform version 1.3.0), Bottlerocket operating system (platform version 1.4.0), and Microsoft Windows 2019 Server Full and Core editions.Unless otherwise specified, the information on this page applies to all Fargate platforms. This topic describes the different components of Fargate tasks and services, and calls out special considerations for using Fargate with Amazon ECS. For information about the Regions that support Linux containers on Fargate, see the section called “Linux containers on AWS Fargate”. For information about the Regions that support Windows containers on Fargate, see the section called “Windows containers on AWS Fargate”. Walkthroughs For information about how to get started using the console, see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task"} +{"global_id": 484, "doc_id": "ecs", "chunk_id": "7", "question_id": 1, "question": "What type of tasks can you create for the Fargate launch type?", "answer_span": "Learn how to create an Amazon ECS Linux task for the Fargate launch type", "chunk": "see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. 
Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the"} +{"global_id": 485, "doc_id": "ecs", "chunk_id": "7", "question_id": 2, "question": "What are the available capacity providers for Amazon ECS?", "answer_span": "The following capacity providers are available: • Fargate • Fargate Spot", "chunk": "see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. 
If you want to use the"} +{"global_id": 486, "doc_id": "ecs", "chunk_id": "7", "question_id": 3, "question": "What happens when AWS needs the capacity back for Fargate Spot tasks?", "answer_span": "your tasks will be interrupted with a two-minute warning", "chunk": "see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the"} +{"global_id": 487, "doc_id": "ecs", "chunk_id": "7", "question_id": 4, "question": "What do AWS Fargate platform versions refer to?", "answer_span": "AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure", "chunk": "see: • Learn how to create an Amazon ECS Linux task for the Fargate launch type • Learn how to create an Amazon ECS Windows task for the Fargate launch type For information about how to get started using the AWS CLI, see: • Creating an Amazon ECS Linux task for the Fargate launch type with the AWS CLI Walkthroughs 167 Amazon Elastic Container Service Developer Guide • Creating an Amazon ECS Windows task for the Fargate launch type with the AWS CLI Capacity providers The following capacity providers are available: • Fargate • Fargate Spot - Run interruption tolerant Amazon ECS tasks at a discounted rate compared to the AWS Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks will be interrupted with a two-minute warning. For more information, see Amazon ECS clusters for Fargate. Task definitions Tasks that use the Fargate launch type don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks. For more information, see Task CPU and memory. 
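The following is a minimal, hypothetical registration showing the Fargate-relevant parameters called out above: requiresCompatibilities set to FARGATE, the awsvpc network mode, and a supported task-level CPU and memory combination. The family name, container image, and execution role ARN are placeholders.

# Register a task definition that can run with the Fargate launch type
aws ecs register-task-definition \
  --family my-fargate-app \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 \
  --memory 512 \
  --execution-role-arn arn:aws:iam::111122223333:role/ecsTaskExecutionRole \
  --container-definitions '[{"name":"web","image":"public.ecr.aws/nginx/nginx:latest","portMappings":[{"containerPort":80,"protocol":"tcp"}],"essential":true}]'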
Platform versions AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the"} +{"global_id": 488, "doc_id": "ecs", "chunk_id": "8", "question_id": 1, "question": "What happens if a security issue is found that affects an existing platform version?", "answer_span": "AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision.", "chunk": "runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information,"} +{"global_id": 489, "doc_id": "ecs", "chunk_id": "8", "question_id": 2, "question": "What must you do to use the latest platform version revision?", "answer_span": "If you want to use the latest platform version revision, then you must start a new task.", "chunk": "runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. 
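Because a running task never changes revisions in place, a common way to move a service onto the latest platform version revision is to force a new deployment, sketched below with placeholder names; the replacement tasks start on the newest revision of the selected platform version.

# Start replacement tasks so the service picks up the latest platform version revision
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --force-new-deployment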
A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information,"} +{"global_id": 490, "doc_id": "ecs", "chunk_id": "8", "question_id": 3, "question": "What types of load balancers are supported by Amazon ECS services on AWS Fargate?", "answer_span": "Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types.", "chunk": "runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. 
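A minimal sketch of that requirement, with a hypothetical name and VPC ID: the target type is ip because each Fargate task registers the private IP address of its elastic network interface rather than an instance ID.

# Create a target group that Fargate tasks (awsvpc network mode) can register with
aws elbv2 create-target-group \
  --name my-fargate-tg \
  --protocol HTTP \
  --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip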
This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information,"} +{"global_id": 491, "doc_id": "ecs", "chunk_id": "8", "question_id": 4, "question": "What is required when creating a target group for these services?", "answer_span": "you must choose ip as the target type, not instance.", "chunk": "runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . Capacity providers 168 Amazon Elastic Container Service Developer Guide For more information see Fargate platform versions for Amazon ECS. Service load balancing Your Amazon ECS service on AWS Fargate can optionally be configured to use Elastic Load Balancing to distribute traffic evenly across the tasks in your service. Amazon ECS services on AWS Fargate support the Application Load Balancer, Network Load Balancer, and load balancer types. Application Load Balancers are used to route HTTP/HTTPS (or layer 7) traffic. Network Load Balancers are used to route TCP or UDP (or layer 4) traffic. For more information, see Use load balancing to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information,"} +{"global_id": 492, "doc_id": "ecs", "chunk_id": "9", "question_id": 1, "question": "What must you choose as the target type when creating a target group for Amazon ECS services?", "answer_span": "you must choose ip as the target type, not instance.", "chunk": "to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. 
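The following is a sketch of such an alarm against Fargate usage reported in the AWS/Usage namespace. The dimension values shown (for example, Resource=vCPU) and the SNS topic ARN are assumptions; confirm them against the usage metrics actually published in your account, since the resource dimension has changed over time (older accounts may see On-Demand and Spot resource counts instead of vCPU).

# Alarm when Fargate usage approaches a chosen share of the account's quota
# (dimension values are assumptions; verify them in the AWS/Usage namespace)
aws cloudwatch put-metric-alarm \
  --alarm-name fargate-usage-near-quota \
  --namespace AWS/Usage \
  --metric-name ResourceCount \
  --dimensions Name=Service,Value=Fargate Name=Type,Value=Resource Name=Resource,Value=vCPU Name=Class,Value=None \
  --statistic Maximum \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 800 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:fargate-quota-alerts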
For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or"} +{"global_id": 493, "doc_id": "ecs", "chunk_id": "9", "question_id": 2, "question": "What is only supported when using platform version 1.4 or later?", "answer_span": "Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later.", "chunk": "to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or"} +{"global_id": 494, "doc_id": "ecs", "chunk_id": "9", "question_id": 3, "question": "What do AWS Fargate usage metrics correspond to?", "answer_span": "AWS Fargate usage metrics correspond to AWS service quotas.", "chunk": "to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. 
This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or"} +{"global_id": 495, "doc_id": "ecs", "chunk_id": "9", "question_id": 4, "question": "What do we recommend for customers looking for strong isolation for their tasks?", "answer_span": "We recommend that customers looking for strong isolation for their tasks use Fargate.", "chunk": "to distribute Amazon ECS service traffic. When you create a target group for these services, you must choose ip as the target type, not instance. This is because tasks that use the awsvpc network mode are associated with an elastic network interface, not an Amazon EC2 instance. For more information, see Use load balancing to distribute Amazon ECS service traffic. Using a Network Load Balancer to route UDP traffic to your Amazon ECS on AWS Fargate tasks is only supported when using platform version 1.4 or later. Usage metrics You can use CloudWatch usage metrics to provide visibility into your accounts usage of resources. Use these metrics to visualize your current service usage on CloudWatch graphs and dashboards. AWS Fargate usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota. For more information about AWS Fargate service quotas, Amazon ECS endpoints and quotas in the Amazon Web Services General Reference.. For more information about AWS Fargate usage metrics, see AWS Fargate usage metrics. Amazon ECS security considerations for when to use the Fargate launch type We recommend that customers looking for strong isolation for their tasks use Fargate. Fargate runs each task in a hardware virtualization environment. This ensures that these containerized workloads do not share network interfaces, Fargate ephemeral storage, CPU, or memory with other tasks. 
For more information, see Security Overview of AWS Fargate. Service load balancing 169 Amazon Elastic Container Service Developer Guide Fargate security best practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or"} +{"global_id": 496, "doc_id": "ecs", "chunk_id": "10", "question_id": 1, "question": "What should you use to encrypt ephemeral storage for Fargate?", "answer_span": "Use AWS KMS to encrypt ephemeral storage for Fargate", "chunk": "practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on"} +{"global_id": 497, "doc_id": "ecs", "chunk_id": "10", "question_id": 2, "question": "What is the maximum amount of ephemeral storage you can specify in your task definition?", "answer_span": "you can increase the total amount of ephemeral storage, up to a maximum of 200 GiB", "chunk": "practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. 
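As described just below, you can raise this default by setting the task-level ephemeralStorage parameter. Here is a minimal, hypothetical sketch that requests 100 GiB; the family, image, CPU, and memory values are placeholders.

# Task definition requesting 100 GiB of ephemeral storage (placeholders elsewhere)
cat > taskdef.json <<'EOF'
{
  "family": "my-fargate-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "2048",
  "ephemeralStorage": { "sizeInGiB": 100 },
  "containerDefinitions": [
    { "name": "web", "image": "public.ecr.aws/nginx/nginx:latest", "essential": true }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://taskdef.json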
For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on"} +{"global_id": 498, "doc_id": "ecs", "chunk_id": "10", "question_id": 3, "question": "What encryption algorithm is used for ephemeral storage encrypted by Fargate?", "answer_span": "the ephemeral storage is encrypted with an AES-256 encryption algorithm", "chunk": "practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. 
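As noted next, SYS_PTRACE is the one capability you can add on Fargate. The following hypothetical registration shows where it goes in a container definition, using placeholder names and images, so that tracing or security tooling in the task can observe the application process.

# Add the SYS_PTRACE capability to a container in a Fargate task definition
aws ecs register-task-definition \
  --family ptrace-demo \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 \
  --memory 512 \
  --container-definitions '[{"name":"app","image":"public.ecr.aws/amazonlinux/amazonlinux:latest","command":["sleep","3600"],"essential":true,"linuxParameters":{"capabilities":{"add":["SYS_PTRACE"]}}}]'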
Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on"} +{"global_id": 499, "doc_id": "ecs", "chunk_id": "10", "question_id": 4, "question": "What kernel capability do tasks launched on Fargate support?", "answer_span": "Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability", "chunk": "practices in Amazon ECS We recommend that you take into account the following best practices when you use AWS Fargate. For additional guidance, see Security overview of AWS Fargate. Use AWS KMS to encrypt ephemeral storage for Fargate You should have your ephemeral storage encrypted by either AWS KMS or your own customer managed keys. For tasks that are hosted on Fargate using platform version 1.4.0 or later, each task receives 20 GiB of ephemeral storage. For more information, see customer managed key (CMK). You can increase the total amount of ephemeral storage, up to a maximum of 200 GiB, by specifying the ephemeralStorage parameter in your task definition. For such tasks that were launched on May 28, 2020 or later, the ephemeral storage is encrypted with an AES-256 encryption algorithm using an encryption key managed by Fargate. For more information, see Storage options for Amazon ECS tasks. Example: Launching an task on Fargate platform version 1.4.0 with ephemeral storage encryption The following command will launch a task on Fargate platform version 1.4. Because this task is launched as part of the cluster, it uses the 20 GiB of ephemeral storage that's automatically encrypted. aws ecs run-task --cluster clustername \\ --task-definition taskdefinition:version \\ --count 1 --launch-type \"FARGATE\" \\ --platform-version 1.4.0 \\ --network-configuration \"awsvpcConfiguration={subnets=[subnetid],securityGroups=[securitygroupid]}\" \\ --region region SYS_PTRACE capability for kernel syscall tracing with Fargate The default configuration of Linux capabilities that are added or removed from your container are provided by Docker. Tasks that are launched on Fargate only support adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on"} +{"global_id": 500, "doc_id": "ecs", "chunk_id": "11", "question_id": 1, "question": "What kernel capability is mentioned in the text?", "answer_span": "adding the SYS_PTRACE kernel capability.", "chunk": "adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. 
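Runtime Monitoring is turned on at the detector level. The sketch below uses a placeholder detector ID, and the feature and configuration names (RUNTIME_MONITORING, ECS_FARGATE_AGENT_MANAGEMENT) are written from memory and should be checked against the current GuardDuty API reference before use.

# Enable Runtime Monitoring with GuardDuty-managed security agents for Fargate (verify names)
aws guardduty update-detector \
  --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
  --features '[{"Name":"RUNTIME_MONITORING","Status":"ENABLED","AdditionalConfiguration":[{"Name":"ECS_FARGATE_AGENT_MANAGEMENT","Status":"ENABLED"}]}]'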
Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated"} +{"global_id": 501, "doc_id": "ecs", "chunk_id": "11", "question_id": 2, "question": "What service does Amazon GuardDuty provide?", "answer_span": "Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment.", "chunk": "adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. 
Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated"} +{"global_id": 502, "doc_id": "ecs", "chunk_id": "11", "question_id": 3, "question": "How does Runtime Monitoring in GuardDuty protect workloads running on Fargate?", "answer_span": "Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior.", "chunk": "adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated"} +{"global_id": 503, "doc_id": "ecs", "chunk_id": "11", "question_id": 4, "question": "What is a sidecar in the context of Amazon ECS tasks?", "answer_span": "A sidecar is a container that runs alongside an application container in an Amazon ECS task.", "chunk": "adding the SYS_PTRACE kernel capability. The following video shows how to use this feature through the Sysdig Falco project. 
Fargate security best practices 170 Amazon Elastic Container Service Developer Guide #ContainersFromTheCouch - Troubleshooting your Fargate Task using SYS_PTRACE capability The code discussed in the previous video can be found on GitHub here. Use Amazon GuardDuty with Fargate Runtime Monitoring Amazon GuardDuty is a threat detection service that helps protect your accounts, containers, workloads, and the data within your AWS environment. Using machine learning (ML) models, and anomaly and threat detection capabilities, GuardDuty continuously monitors different log sources and runtime activity to identify and prioritize potential security risks and malicious activities in your environment. Runtime Monitoring in GuardDuty protects workloads running on Fargate by continuously monitoring AWS log and networking activity to identify malicious or unauthorized behavior. Runtime Monitoring uses a lightweight, fully managed GuardDuty security agent that analyzes onhost behavior, such as file access, process execution, and network connections. This covers issues including escalation of privileges, use of exposed credentials, or communication with malicious IP addresses, domains, and the presence of malware on your Amazon EC2 instances and container workloads. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. Fargate security considerations for Amazon ECS Each task has a dedicated infrastructure capacity because Fargate runs each workload on an isolated virtual environment. Workloads that run on Fargate do not share network interfaces, ephemeral storage, CPU, or memory with other tasks. You can run multiple containers within a task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated"} +{"global_id": 504, "doc_id": "ecs", "chunk_id": "12", "question_id": 1, "question": "What is a sidecar in the context of Amazon ECS tasks?", "answer_span": "A sidecar is a container that runs alongside an application container in an Amazon ECS task.", "chunk": "task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. 
Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You"} +{"global_id": 505, "doc_id": "ecs", "chunk_id": "12", "question_id": 2, "question": "What do sidecars help you do with application functions?", "answer_span": "Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application.", "chunk": "task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. 
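A short sketch of ECS exec with placeholder identifiers: the task must be started with execute command enabled and its task role needs the SSM messages permissions, after which you can open an interactive shell in a named container.

# Start a Fargate task with ECS exec enabled (other required parameters as in earlier examples)
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-taskdef:1 \
  --launch-type FARGATE \
  --enable-execute-command \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}"

# Open an interactive shell in the "web" container of the running task
aws ecs execute-command \
  --cluster my-cluster \
  --task 0123456789abcdef0123456789abcdef \
  --container web \
  --interactive \
  --command "/bin/sh"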
You"} +{"global_id": 506, "doc_id": "ecs", "chunk_id": "12", "question_id": 3, "question": "What is restricted in the Fargate environment regarding Linux capabilities?", "answer_span": "Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation.", "chunk": "task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You"} +{"global_id": 507, "doc_id": "ecs", "chunk_id": "12", "question_id": 4, "question": "Can customers connect to a host running customer workloads on Fargate?", "answer_span": "Neither customers nor AWS operators can connect to a host running customer workloads.", "chunk": "task including application containers and sidecar containers, or simply sidecars. A sidecar is a container that runs alongside an application container in an Amazon ECS task. While the application container runs core application code, processes running in sidecars can augment the application. Sidecars help you segregate application functions into dedicated containers, making it easier for you to update parts of your application. Containers that are part of the same task share resources for the Fargate launch type because these containers will always run on the same host and share compute resources. These containers also share the ephemeral storage provided by Fargate. Linux containers in a task share network namespaces, including the IP address and network ports. Inside a task, containers that belong to the task can inter-communicate over localhost. 
The runtime environment in Fargate prevents you from using certain controller features that are supported on EC2 instances. Consider the following when you architect workloads that run on Fargate: Use Amazon GuardDuty with Fargate Runtime Monitoring 171 Amazon Elastic Container Service Developer Guide • No privileged containers or access - Features such as privileged containers or access are currently unavailable on Fargate. This will affect uses cases such as running Docker in Docker. • Limited access to Linux capabilities - The environment in which containers run on Fargate is locked down. Additional Linux capabilities, such as CAP_SYS_ADMIN and CAP_NET_ADMIN, are restricted to prevent a privilege escalation. Fargate supports adding the CAP_SYS_PTRACE Linux capability to tasks to allow observability and security tools deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You"} +{"global_id": 508, "doc_id": "ecs", "chunk_id": "13", "question_id": 1, "question": "What can you use ECS exec for?", "answer_span": "You can use ECS exec to run commands in or get a shell to a container running on Fargate.", "chunk": "deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. 
In some cases, you"} +{"global_id": 509, "doc_id": "ecs", "chunk_id": "13", "question_id": 2, "question": "What prevents containers from accessing the underlying host’s resources?", "answer_span": "Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime.", "chunk": "deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you"} +{"global_id": 510, "doc_id": "ecs", "chunk_id": "13", "question_id": 3, "question": "What do Fargate tasks receive from the configured subnet in your VPC?", "answer_span": "Fargate tasks receive an IP address from the configured subnet in your VPC.", "chunk": "deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. 
New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you"} +{"global_id": 511, "doc_id": "ecs", "chunk_id": "13", "question_id": 4, "question": "What happens if a security issue is found that affects an existing platform version?", "answer_span": "AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision.", "chunk": "deployed within the task to monitor the containerized application. • No access to the underlying host - Neither customers nor AWS operators can connect to a host running customer workloads. You can use ECS exec to run commands in or get a shell to a container running on Fargate. You can use ECS exec to help collect diagnostic information for debugging. Fargate also prevents containers from accessing the underlying host’s resources, such as the file system, devices, networking, and container runtime. • Networking - You can use security groups and network ACLs to control inbound and outbound traffic. Fargate tasks receive an IP address from the configured subnet in your VPC. Fargate platform versions for Amazon ECS AWS Fargate platform versions are used to refer to a specific runtime environment for Fargate task infrastructure. It is a combination of the kernel and container runtime versions. You select a platform version when you run a task or when you create a service to maintain a number of identical tasks. New revisions of platform versions are released as the runtime environment evolves, for example, if there are kernel or operating system updates, new features, bug fixes, or security updates. A Fargate platform version is updated by making a new platform version revision. Each task runs on one platform version revision during its lifecycle. If you want to use the latest platform version revision, then you must start a new task. A new task that runs on Fargate always runs on the latest revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you"} +{"global_id": 512, "doc_id": "ecs", "chunk_id": "14", "question_id": 1, "question": "What does AWS do if a security issue is found that affects an existing platform version?", "answer_span": "AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision.", "chunk": "revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. 
If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Amazon ECS clusters An Amazon ECS cluster is a logical grouping of tasks or services. In addition to tasks and services, a cluster consists of the following resources: • The infrastructure capacity which can be a combination of the following: • Amazon EC2 instances in the AWS cloud • Serverless (AWS Fargate) in the AWS cloud • On-premises virtual machines (VM) or servers • The network (VPC and subnet) where your tasks and services run When you use Amazon EC2 instances for the capacity, the subnet can be in Availability Zones, Local Zones, Wavelength Zones or AWS Outposts. • An optional namespace The namespace is used for service-to-service communication with Service Connect. • A monitoring option CloudWatch Container Insights comes at an additional cost and is a fully managed service. It automatically collects, aggregates, and summarizes Amazon ECS metrics and logs. The following are general concepts about Amazon ECS clusters. • You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity"} +{"global_id": 513, "doc_id": "ecs", "chunk_id": "14", "question_id": 2, "question": "What is an Amazon ECS cluster?", "answer_span": "An Amazon ECS cluster is a logical grouping of tasks or services.", "chunk": "revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Amazon ECS clusters An Amazon ECS cluster is a logical grouping of tasks or services. In addition to tasks and services, a cluster consists of the following resources: • The infrastructure capacity which can be a combination of the following: • Amazon EC2 instances in the AWS cloud • Serverless (AWS Fargate) in the AWS cloud • On-premises virtual machines (VM) or servers • The network (VPC and subnet) where your tasks and services run When you use Amazon EC2 instances for the capacity, the subnet can be in Availability Zones, Local Zones, Wavelength Zones or AWS Outposts. • An optional namespace The namespace is used for service-to-service communication with Service Connect. • A monitoring option CloudWatch Container Insights comes at an additional cost and is a fully managed service. It automatically collects, aggregates, and summarizes Amazon ECS metrics and logs. The following are general concepts about Amazon ECS clusters. 
• You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity"} +{"global_id": 514, "doc_id": "ecs", "chunk_id": "14", "question_id": 3, "question": "What can the infrastructure capacity of a cluster consist of?", "answer_span": "The infrastructure capacity which can be a combination of the following: • Amazon EC2 instances in the AWS cloud • Serverless (AWS Fargate) in the AWS cloud • On-premises virtual machines (VM) or servers.", "chunk": "revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Amazon ECS clusters An Amazon ECS cluster is a logical grouping of tasks or services. In addition to tasks and services, a cluster consists of the following resources: • The infrastructure capacity which can be a combination of the following: • Amazon EC2 instances in the AWS cloud • Serverless (AWS Fargate) in the AWS cloud • On-premises virtual machines (VM) or servers • The network (VPC and subnet) where your tasks and services run When you use Amazon EC2 instances for the capacity, the subnet can be in Availability Zones, Local Zones, Wavelength Zones or AWS Outposts. • An optional namespace The namespace is used for service-to-service communication with Service Connect. • A monitoring option CloudWatch Container Insights comes at an additional cost and is a fully managed service. It automatically collects, aggregates, and summarizes Amazon ECS metrics and logs. The following are general concepts about Amazon ECS clusters. • You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity"} +{"global_id": 515, "doc_id": "ecs", "chunk_id": "14", "question_id": 4, "question": "What does CloudWatch Container Insights do?", "answer_span": "CloudWatch Container Insights comes at an additional cost and is a fully managed service. It automatically collects, aggregates, and summarizes Amazon ECS metrics and logs.", "chunk": "revision of a platform version, ensuring that tasks are always started on secure and patched infrastructure. If a security issue is found that affects an existing platform version, AWS creates a new patched revision of the platform version and retires tasks running on the vulnerable revision. In some cases, you may be notified that your tasks on Fargate have been scheduled for retirement. For more information, see Task retirement and maintenance for AWS Fargate on Amazon ECS . You specify the platform version when you run a task, or deploy a service. 
Fargate platform versions 172 Amazon Elastic Container Service Developer Guide Amazon ECS clusters An Amazon ECS cluster is a logical grouping of tasks or services. In addition to tasks and services, a cluster consists of the following resources: • The infrastructure capacity which can be a combination of the following: • Amazon EC2 instances in the AWS cloud • Serverless (AWS Fargate) in the AWS cloud • On-premises virtual machines (VM) or servers • The network (VPC and subnet) where your tasks and services run When you use Amazon EC2 instances for the capacity, the subnet can be in Availability Zones, Local Zones, Wavelength Zones or AWS Outposts. • An optional namespace The namespace is used for service-to-service communication with Service Connect. • A monitoring option CloudWatch Container Insights comes at an additional cost and is a fully managed service. It automatically collects, aggregates, and summarizes Amazon ECS metrics and logs. The following are general concepts about Amazon ECS clusters. • You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity"} +{"global_id": 516, "doc_id": "ecs", "chunk_id": "15", "question_id": 1, "question": "What is the purpose of creating clusters?", "answer_span": "You create clusters to separate your resources.", "chunk": "ECS clusters. • You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being created. 685 Amazon Elastic Container Service Developer Guide DEPROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an INACTIVE status may remain discoverable in your account for a period of time. This behavior is subject to change in the future, so make sure you do not rely on INACTIVE clusters persisting. • You can use different instance types for the EC2 launch type or Auto Scaling group capacity providers. An instance can only be registered to one cluster at a time. • You can restrict access to clusters by creating custom IAM policies. For information, see Amazon ECS cluster examples section in Identity-based policy examples for Amazon Elastic Container Service. • You can use Service Auto Scaling to scale Fargate tasks. For more information, see Automatically scale your Amazon ECS service. • You can configure a default Service Connect namespace for a cluster. After you set a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. 
Capacity providers Amazon ECS"} +{"global_id": 517, "doc_id": "ecs", "chunk_id": "15", "question_id": 2, "question": "What does the ACTIVE state of a cluster indicate?", "answer_span": "The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster.", "chunk": "ECS clusters. • You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being created. 685 Amazon Elastic Container Service Developer Guide DEPROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an INACTIVE status may remain discoverable in your account for a period of time. This behavior is subject to change in the future, so make sure you do not rely on INACTIVE clusters persisting. • You can use different instance types for the EC2 launch type or Auto Scaling group capacity providers. An instance can only be registered to one cluster at a time. • You can restrict access to clusters by creating custom IAM policies. For information, see Amazon ECS cluster examples section in Identity-based policy examples for Amazon Elastic Container Service. • You can use Service Auto Scaling to scale Fargate tasks. For more information, see Automatically scale your Amazon ECS service. • You can configure a default Service Connect namespace for a cluster. After you set a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. Capacity providers Amazon ECS"} +{"global_id": 518, "doc_id": "ecs", "chunk_id": "15", "question_id": 3, "question": "What happens to clusters with an INACTIVE status?", "answer_span": "Clusters with an INACTIVE status may remain discoverable in your account for a period of time.", "chunk": "ECS clusters. • You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being created. 685 Amazon Elastic Container Service Developer Guide DEPROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an INACTIVE status may remain discoverable in your account for a period of time. This behavior is subject to change in the future, so make sure you do not rely on INACTIVE clusters persisting. • You can use different instance types for the EC2 launch type or Auto Scaling group capacity providers. 
An instance can only be registered to one cluster at a time. • You can restrict access to clusters by creating custom IAM policies. For information, see Amazon ECS cluster examples section in Identity-based policy examples for Amazon Elastic Container Service. • You can use Service Auto Scaling to scale Fargate tasks. For more information, see Automatically scale your Amazon ECS service. • You can configure a default Service Connect namespace for a cluster. After you set a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. Capacity providers Amazon ECS"} +{"global_id": 519, "doc_id": "ecs", "chunk_id": "15", "question_id": 4, "question": "Can an instance be registered to multiple clusters at the same time?", "answer_span": "An instance can only be registered to one cluster at a time.", "chunk": "ECS clusters. • You create clusters to separate your resources. • Clusters are AWS Region specific. • Clusters can be in any of the following states. ACTIVE The cluster is ready to accept tasks and, if applicable, you can register container instances with the cluster. PROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being created. 685 Amazon Elastic Container Service Developer Guide DEPROVISIONING The cluster has capacity providers associated with it and the resources needed for the capacity provider are being deleted. FAILED The cluster has capacity providers associated with it and the resources needed for the capacity provider have failed to create. INACTIVE The cluster has been deleted. Clusters with an INACTIVE status may remain discoverable in your account for a period of time. This behavior is subject to change in the future, so make sure you do not rely on INACTIVE clusters persisting. • You can use different instance types for the EC2 launch type or Auto Scaling group capacity providers. An instance can only be registered to one cluster at a time. • You can restrict access to clusters by creating custom IAM policies. For information, see Amazon ECS cluster examples section in Identity-based policy examples for Amazon Elastic Container Service. • You can use Service Auto Scaling to scale Fargate tasks. For more information, see Automatically scale your Amazon ECS service. • You can configure a default Service Connect namespace for a cluster. After you set a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. Capacity providers Amazon ECS"} +{"global_id": 520, "doc_id": "ecs", "chunk_id": "16", "question_id": 1, "question": "What happens when you turn on Service Connect in a default Service Connect namespace?", "answer_span": "any new services created in the cluster can be added as client services in the namespace by turning on Service Connect.", "chunk": "a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. 
Capacity providers Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. You can assign a default capacity provider strategy to the cluster. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one.The cluster's default capacity provider strategy only applies when you don't specify a launch type, or capacity provider strategy for your task or service. If you provide either of these parameters, the default strategy isn't used. Capacity providers 686 Amazon Elastic Container Service Developer Guide For Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot When you use EC2 instances for your capacity, you use Auto Scaling group to manage the EC2 instances. Auto Scaling helps ensure that you have the correct number of EC2 instances available to handle the application load. A cluster can contain a mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon"} +{"global_id": 521, "doc_id": "ecs", "chunk_id": "16", "question_id": 2, "question": "What do Amazon ECS capacity providers manage?", "answer_span": "Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters.", "chunk": "a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. Capacity providers Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. You can assign a default capacity provider strategy to the cluster. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one.The cluster's default capacity provider strategy only applies when you don't specify a launch type, or capacity provider strategy for your task or service. If you provide either of these parameters, the default strategy isn't used. Capacity providers 686 Amazon Elastic Container Service Developer Guide For Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot When you use EC2 instances for your capacity, you use Auto Scaling group to manage the EC2 instances. Auto Scaling helps ensure that you have the correct number of EC2 instances available to handle the application load. A cluster can contain a mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. 
Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon"} +{"global_id": 522, "doc_id": "ecs", "chunk_id": "16", "question_id": 3, "question": "What is required to use Fargate as a capacity provider?", "answer_span": "you do not need to create or manage the capacity.", "chunk": "a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. Capacity providers Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. You can assign a default capacity provider strategy to the cluster. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one.The cluster's default capacity provider strategy only applies when you don't specify a launch type, or capacity provider strategy for your task or service. If you provide either of these parameters, the default strategy isn't used. Capacity providers 686 Amazon Elastic Container Service Developer Guide For Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot When you use EC2 instances for your capacity, you use Auto Scaling group to manage the EC2 instances. Auto Scaling helps ensure that you have the correct number of EC2 instances available to handle the application load. A cluster can contain a mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon"} +{"global_id": 523, "doc_id": "ecs", "chunk_id": "16", "question_id": 4, "question": "What helps ensure that you have the correct number of EC2 instances available?", "answer_span": "Auto Scaling helps ensure that you have the correct number of EC2 instances available to handle the application load.", "chunk": "a default Service Connect namespace, any new services created in the cluster can be added as client services in the namespace by turning on Service Connect. No additional configuration is required. For more information, see Use Service Connect to connect Amazon ECS services with short names. Capacity providers Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. You can assign a default capacity provider strategy to the cluster. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. 
When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one.The cluster's default capacity provider strategy only applies when you don't specify a launch type, or capacity provider strategy for your task or service. If you provide either of these parameters, the default strategy isn't used. Capacity providers 686 Amazon Elastic Container Service Developer Guide For Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot When you use EC2 instances for your capacity, you use Auto Scaling group to manage the EC2 instances. Auto Scaling helps ensure that you have the correct number of EC2 instances available to handle the application load. A cluster can contain a mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon"} +{"global_id": 524, "doc_id": "ecs", "chunk_id": "17", "question_id": 1, "question": "What types of infrastructure can tasks run on?", "answer_span": "Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy.", "chunk": "mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon EC2 Auto Scaling groups. For more information about launch types, see Amazon ECS launch types. A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. A capacity provider strategy can only contain Auto Scaling group capacity providers or Fargate capacity providers. Amazon ECS clusters for Fargate Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one. When you run your tasks on AWS Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot With Amazon ECS on AWS Fargate capacity providers, you can use both Fargate and Fargate Spot capacity with your Amazon ECS tasks. Clusters for Fargate 687 Amazon Elastic Container Service Developer Guide With Fargate Spot, you can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. 
When tasks that use the Fargate and Fargate Spot capacity providers"} +{"global_id": 525, "doc_id": "ecs", "chunk_id": "17", "question_id": 2, "question": "What does Amazon ECS not track when using EC2 as a launch type?", "answer_span": "Amazon ECS doesn't track and scale the capacity of Amazon EC2 Auto Scaling groups.", "chunk": "mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon EC2 Auto Scaling groups. For more information about launch types, see Amazon ECS launch types. A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. A capacity provider strategy can only contain Auto Scaling group capacity providers or Fargate capacity providers. Amazon ECS clusters for Fargate Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one. When you run your tasks on AWS Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot With Amazon ECS on AWS Fargate capacity providers, you can use both Fargate and Fargate Spot capacity with your Amazon ECS tasks. Clusters for Fargate 687 Amazon Elastic Container Service Developer Guide With Fargate Spot, you can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. When tasks that use the Fargate and Fargate Spot capacity providers"} +{"global_id": 526, "doc_id": "ecs", "chunk_id": "17", "question_id": 3, "question": "What can a cluster contain in terms of capacity providers?", "answer_span": "A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers.", "chunk": "mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon EC2 Auto Scaling groups. For more information about launch types, see Amazon ECS launch types. A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. A capacity provider strategy can only contain Auto Scaling group capacity providers or Fargate capacity providers. Amazon ECS clusters for Fargate Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. 
When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one. When you run your tasks on AWS Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot With Amazon ECS on AWS Fargate capacity providers, you can use both Fargate and Fargate Spot capacity with your Amazon ECS tasks. Clusters for Fargate 687 Amazon Elastic Container Service Developer Guide With Fargate Spot, you can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. When tasks that use the Fargate and Fargate Spot capacity providers"} +{"global_id": 527, "doc_id": "ecs", "chunk_id": "17", "question_id": 4, "question": "What happens when AWS needs the capacity back for Fargate Spot tasks?", "answer_span": "When AWS needs the capacity back, your tasks are interrupted with a two-minute warning.", "chunk": "mix of tasks that are hosted on AWS Fargate, Amazon EC2 instances, or external instances. Tasks can run on Fargate or EC2 infrastructure as a launch type or a capacity provider strategy. If you use EC2 as a launch type, Amazon ECS doesn't track and scale the capacity of Amazon EC2 Auto Scaling groups. For more information about launch types, see Amazon ECS launch types. A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. A capacity provider strategy can only contain Auto Scaling group capacity providers or Fargate capacity providers. Amazon ECS clusters for Fargate Amazon ECS capacity providers manage the scaling of infrastructure for tasks in your clusters. Each cluster can have one or more capacity providers and an optional capacity provider strategy. The capacity provider strategy determines how the tasks are spread across the cluster's capacity providers. When you run a standalone task or create a service, you either use the cluster's default capacity provider strategy or a capacity provider strategy that overrides the default one. When you run your tasks on AWS Fargate, you do not need to create or manage the capacity. You just need to associate any of the following pre-defined capacity providers with the cluster: • Fargate • Fargate Spot With Amazon ECS on AWS Fargate capacity providers, you can use both Fargate and Fargate Spot capacity with your Amazon ECS tasks. Clusters for Fargate 687 Amazon Elastic Container Service Developer Guide With Fargate Spot, you can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. When tasks that use the Fargate and Fargate Spot capacity providers"} +{"global_id": 528, "doc_id": "ecs", "chunk_id": "18", "question_id": 1, "question": "What type of tasks can run interruption tolerant on Amazon ECS?", "answer_span": "can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price.", "chunk": "can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. 
When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. When tasks that use the Fargate and Fargate Spot capacity providers are stopped, the task state change event is sent to Amazon EventBridge. The stopped reason describes the cause. For more information, see Amazon ECS task state change events. A cluster can contain a mix of Fargate and Auto Scaling group capacity providers. However, a capacity provider strategy can only contain either Fargate or Auto Scaling group capacity providers, but not both. For more information, see Auto Scaling Group Capacity Providers. Consider the following when using capacity providers: • You must associate a capacity provider with a cluster before you associate it with the capacity provider strategy. • You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity"} +{"global_id": 529, "doc_id": "ecs", "chunk_id": "18", "question_id": 2, "question": "What happens when AWS needs the capacity back from Fargate Spot?", "answer_span": "When AWS needs the capacity back, your tasks are interrupted with a two-minute warning.", "chunk": "can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. When tasks that use the Fargate and Fargate Spot capacity providers are stopped, the task state change event is sent to Amazon EventBridge. The stopped reason describes the cause. For more information, see Amazon ECS task state change events. A cluster can contain a mix of Fargate and Auto Scaling group capacity providers. However, a capacity provider strategy can only contain either Fargate or Auto Scaling group capacity providers, but not both. For more information, see Auto Scaling Group Capacity Providers. Consider the following when using capacity providers: • You must associate a capacity provider with a cluster before you associate it with the capacity provider strategy. • You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. 
If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity"} +{"global_id": 530, "doc_id": "ecs", "chunk_id": "18", "question_id": 3, "question": "How many capacity providers can you specify for a capacity provider strategy?", "answer_span": "You can specify a maximum of 20 capacity providers for a capacity provider strategy.", "chunk": "can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. When tasks that use the Fargate and Fargate Spot capacity providers are stopped, the task state change event is sent to Amazon EventBridge. The stopped reason describes the cause. For more information, see Amazon ECS task state change events. A cluster can contain a mix of Fargate and Auto Scaling group capacity providers. However, a capacity provider strategy can only contain either Fargate or Auto Scaling group capacity providers, but not both. For more information, see Auto Scaling Group Capacity Providers. Consider the following when using capacity providers: • You must associate a capacity provider with a cluster before you associate it with the capacity provider strategy. • You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity"} +{"global_id": 531, "doc_id": "ecs", "chunk_id": "18", "question_id": 4, "question": "What is the default weight value used in a capacity provider strategy if none is specified?", "answer_span": "if no weight value is specified for a capacity provider in the console, then the default value of 1 is used.", "chunk": "can run interruption tolerant Amazon ECS tasks at a rate that's discounted compared to the Fargate price. Fargate Spot runs tasks on spare compute capacity. When AWS needs the capacity back, your tasks are interrupted with a two-minute warning. When tasks that use the Fargate and Fargate Spot capacity providers are stopped, the task state change event is sent to Amazon EventBridge. The stopped reason describes the cause. For more information, see Amazon ECS task state change events. A cluster can contain a mix of Fargate and Auto Scaling group capacity providers. However, a capacity provider strategy can only contain either Fargate or Auto Scaling group capacity providers, but not both. For more information, see Auto Scaling Group Capacity Providers. Consider the following when using capacity providers: • You must associate a capacity provider with a cluster before you associate it with the capacity provider strategy. 
• You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity"} +{"global_id": 532, "doc_id": "ecs", "chunk_id": "19", "question_id": 1, "question": "What happens if you specify multiple capacity providers in a strategy with all the same weight of zero?", "answer_span": "then any RunTask or CreateService actions using the capacity provider strategy fail.", "chunk": "value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity provider strategy, only one capacity provider can have a defined base value. If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so. Clusters for Fargate 688 Amazon Elastic Container Service 9. Developer Guide Choose Create. Next steps After you create the cluster, you can create task definitions for your applications and then run them as standalone tasks, or as part of a service. For more information, see the following: • Amazon ECS task definitions • Running an application as an Amazon ECS task • Creating an Amazon ECS rolling update deployment Amazon ECS capacity providers for the EC2 launch type When you use Amazon EC2 instances for your capacity, you use Auto Scaling groups to manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling"} +{"global_id": 533, "doc_id": "ecs", "chunk_id": "19", "question_id": 2, "question": "How many capacity providers can have a defined base value in a capacity provider strategy?", "answer_span": "only one capacity provider can have a defined base value.", "chunk": "value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. 
• In a capacity provider strategy, only one capacity provider can have a defined base value. If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so. Clusters for Fargate 688 Amazon Elastic Container Service 9. Developer Guide Choose Create. Next steps After you create the cluster, you can create task definitions for your applications and then run them as standalone tasks, or as part of a service. For more information, see the following: • Amazon ECS task definitions • Running an application as an Amazon ECS task • Creating an Amazon ECS rolling update deployment Amazon ECS capacity providers for the EC2 launch type When you use Amazon EC2 instances for your capacity, you use Auto Scaling groups to manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling"} +{"global_id": 534, "doc_id": "ecs", "chunk_id": "19", "question_id": 3, "question": "What types of capacity providers can a cluster contain?", "answer_span": "A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers.", "chunk": "value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity provider strategy, only one capacity provider can have a defined base value. If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so. Clusters for Fargate 688 Amazon Elastic Container Service 9. Developer Guide Choose Create. Next steps After you create the cluster, you can create task definitions for your applications and then run them as standalone tasks, or as part of a service. For more information, see the following: • Amazon ECS task definitions • Running an application as an Amazon ECS task • Creating an Amazon ECS rolling update deployment Amazon ECS capacity providers for the EC2 launch type When you use Amazon EC2 instances for your capacity, you use Auto Scaling groups to manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. 
You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling"} +{"global_id": 535, "doc_id": "ecs", "chunk_id": "19", "question_id": 4, "question": "What must you do when updating a service to use a capacity provider strategy rather than a launch type?", "answer_span": "you must force a new deployment when doing so.", "chunk": "value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity provider strategy, only one capacity provider can have a defined base value. If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so. Clusters for Fargate 688 Amazon Elastic Container Service 9. Developer Guide Choose Create. Next steps After you create the cluster, you can create task definitions for your applications and then run them as standalone tasks, or as part of a service. For more information, see the following: • Amazon ECS task definitions • Running an application as an Amazon ECS task • Creating an Amazon ECS rolling update deployment Amazon ECS capacity providers for the EC2 launch type When you use Amazon EC2 instances for your capacity, you use Auto Scaling groups to manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling"} +{"global_id": 536, "doc_id": "ecs", "chunk_id": "20", "question_id": 1, "question": "What does Auto Scaling help ensure?", "answer_span": "Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load.", "chunk": "manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling group, or you can manage the scaling actions yourself. For more information, see Automatically manage Amazon ECS capacity with cluster auto scaling. We recommend that you create a new empty Auto Scaling group. If you use an existing Auto Scaling group, any Amazon EC2 instances that are associated with the group that were already running and registered to an Amazon ECS cluster before the Auto Scaling group being used to create a capacity provider might not be properly registered with the capacity provider. This might cause issues when using the capacity provider in a capacity provider strategy. Use DescribeContainerInstances to confirm whether a container instance is associated with a capacity provider or not. 
Note To create an empty Auto Scaling group, set the desired count to zero. After you created the capacity provider and associated it with a cluster, you can then scale it out. When you use the Amazon ECS console, Amazon ECS creates an Amazon EC2 launch template and Auto Scaling group on your behalf as part of the AWS CloudFormation stack. They are prefixed with EC2ContainerService-. You can use the Auto Scaling group as a capacity provider for that cluster. Capacity providers for the EC2 launch type 693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following"} +{"global_id": 537, "doc_id": "ecs", "chunk_id": "20", "question_id": 2, "question": "What should you set the desired count to create an empty Auto Scaling group?", "answer_span": "set the desired count to zero.", "chunk": "manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling group, or you can manage the scaling actions yourself. For more information, see Automatically manage Amazon ECS capacity with cluster auto scaling. We recommend that you create a new empty Auto Scaling group. If you use an existing Auto Scaling group, any Amazon EC2 instances that are associated with the group that were already running and registered to an Amazon ECS cluster before the Auto Scaling group being used to create a capacity provider might not be properly registered with the capacity provider. This might cause issues when using the capacity provider in a capacity provider strategy. Use DescribeContainerInstances to confirm whether a container instance is associated with a capacity provider or not. Note To create an empty Auto Scaling group, set the desired count to zero. After you created the capacity provider and associated it with a cluster, you can then scale it out. When you use the Amazon ECS console, Amazon ECS creates an Amazon EC2 launch template and Auto Scaling group on your behalf as part of the AWS CloudFormation stack. They are prefixed with EC2ContainerService-. You can use the Auto Scaling group as a capacity provider for that cluster. Capacity providers for the EC2 launch type 693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following"} +{"global_id": 538, "doc_id": "ecs", "chunk_id": "20", "question_id": 3, "question": "What does Amazon ECS create on your behalf when using the console?", "answer_span": "Amazon ECS creates an Amazon EC2 launch template and Auto Scaling group on your behalf as part of the AWS CloudFormation stack.", "chunk": "manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. 
You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling group, or you can manage the scaling actions yourself. For more information, see Automatically manage Amazon ECS capacity with cluster auto scaling. We recommend that you create a new empty Auto Scaling group. If you use an existing Auto Scaling group, any Amazon EC2 instances that are associated with the group that were already running and registered to an Amazon ECS cluster before the Auto Scaling group being used to create a capacity provider might not be properly registered with the capacity provider. This might cause issues when using the capacity provider in a capacity provider strategy. Use DescribeContainerInstances to confirm whether a container instance is associated with a capacity provider or not. Note To create an empty Auto Scaling group, set the desired count to zero. After you created the capacity provider and associated it with a cluster, you can then scale it out. When you use the Amazon ECS console, Amazon ECS creates an Amazon EC2 launch template and Auto Scaling group on your behalf as part of the AWS CloudFormation stack. They are prefixed with EC2ContainerService-. You can use the Auto Scaling group as a capacity provider for that cluster. Capacity providers for the EC2 launch type 693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following"} +{"global_id": 539, "doc_id": "ecs", "chunk_id": "20", "question_id": 4, "question": "What feature is recommended to allow for graceful termination of Amazon EC2 instances?", "answer_span": "we recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads.", "chunk": "manage the Amazon EC2 instances registered to their clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load. You can use the managed scaling feature to have Amazon ECS manage the scale-in and scaleout actions of the Auto Scaling group, or you can manage the scaling actions yourself. For more information, see Automatically manage Amazon ECS capacity with cluster auto scaling. We recommend that you create a new empty Auto Scaling group. If you use an existing Auto Scaling group, any Amazon EC2 instances that are associated with the group that were already running and registered to an Amazon ECS cluster before the Auto Scaling group being used to create a capacity provider might not be properly registered with the capacity provider. This might cause issues when using the capacity provider in a capacity provider strategy. Use DescribeContainerInstances to confirm whether a container instance is associated with a capacity provider or not. Note To create an empty Auto Scaling group, set the desired count to zero. After you created the capacity provider and associated it with a cluster, you can then scale it out. When you use the Amazon ECS console, Amazon ECS creates an Amazon EC2 launch template and Auto Scaling group on your behalf as part of the AWS CloudFormation stack. They are prefixed with EC2ContainerService-. You can use the Auto Scaling group as a capacity provider for that cluster. 
Capacity providers for the EC2 launch type 693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following"} +{"global_id": 540, "doc_id": "ecs", "chunk_id": "21", "question_id": 1, "question": "What is recommended to allow for graceful termination of Amazon EC2 instances?", "answer_span": "We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads.", "chunk": "693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following when using Auto Scaling group capacity providers in the console: • An Auto Scaling group must have a MaxSize greater than zero to scale out. • The Auto Scaling group can't have instance weighting settings. • If the Auto Scaling group can't scale out to accommodate the number of tasks run, the tasks fails to transition beyond the PROVISIONING state. • Don't modify the scaling policy resource associated with your Auto Scaling groups that are managed by capacity providers. • If managed scaling is turned on when you create a capacity provider, the Auto Scaling group desired count can be set to 0. When managed scaling is turned on, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • You must associate capacity provider with a cluster before you associate it with the capacity provider strategy. • You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any"} +{"global_id": 541, "doc_id": "ecs", "chunk_id": "21", "question_id": 2, "question": "What must an Auto Scaling group have to scale out?", "answer_span": "An Auto Scaling group must have a MaxSize greater than zero to scale out.", "chunk": "693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following when using Auto Scaling group capacity providers in the console: • An Auto Scaling group must have a MaxSize greater than zero to scale out. • The Auto Scaling group can't have instance weighting settings. • If the Auto Scaling group can't scale out to accommodate the number of tasks run, the tasks fails to transition beyond the PROVISIONING state. • Don't modify the scaling policy resource associated with your Auto Scaling groups that are managed by capacity providers. 
• If managed scaling is turned on when you create a capacity provider, the Auto Scaling group desired count can be set to 0. When managed scaling is turned on, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • You must associate capacity provider with a cluster before you associate it with the capacity provider strategy. • You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any"} +{"global_id": 542, "doc_id": "ecs", "chunk_id": "21", "question_id": 3, "question": "What happens if the Auto Scaling group can't scale out to accommodate the number of tasks run?", "answer_span": "If the Auto Scaling group can't scale out to accommodate the number of tasks run, the tasks fails to transition beyond the PROVISIONING state.", "chunk": "693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following when using Auto Scaling group capacity providers in the console: • An Auto Scaling group must have a MaxSize greater than zero to scale out. • The Auto Scaling group can't have instance weighting settings. • If the Auto Scaling group can't scale out to accommodate the number of tasks run, the tasks fails to transition beyond the PROVISIONING state. • Don't modify the scaling policy resource associated with your Auto Scaling groups that are managed by capacity providers. • If managed scaling is turned on when you create a capacity provider, the Auto Scaling group desired count can be set to 0. When managed scaling is turned on, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • You must associate capacity provider with a cluster before you associate it with the capacity provider strategy. • You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. 
Any"} +{"global_id": 543, "doc_id": "ecs", "chunk_id": "21", "question_id": 4, "question": "What is the default weight value used in a capacity provider strategy if no weight value is specified?", "answer_span": "if no weight value is specified for a capacity provider in the console, then the default value of 1 is used.", "chunk": "693 Amazon Elastic Container Service Developer Guide We recommend you use managed instance draining to allow for graceful termination of Amazon EC2 instances that won't disrupt your workloads. This feature is on by default. For more information, see Safely stop Amazon ECS workloads running on EC2 instances Consider the following when using Auto Scaling group capacity providers in the console: • An Auto Scaling group must have a MaxSize greater than zero to scale out. • The Auto Scaling group can't have instance weighting settings. • If the Auto Scaling group can't scale out to accommodate the number of tasks run, the tasks fails to transition beyond the PROVISIONING state. • Don't modify the scaling policy resource associated with your Auto Scaling groups that are managed by capacity providers. • If managed scaling is turned on when you create a capacity provider, the Auto Scaling group desired count can be set to 0. When managed scaling is turned on, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • You must associate capacity provider with a cluster before you associate it with the capacity provider strategy. • You can specify a maximum of 20 capacity providers for a capacity provider strategy. • You can't update a service using an Auto Scaling group capacity provider to use a Fargate capacity provider. The opposite is also the case. • In a capacity provider strategy, if no weight value is specified for a capacity provider in the console, then the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any"} +{"global_id": 544, "doc_id": "ecs", "chunk_id": "22", "question_id": 1, "question": "What is the default value used when using the API or AWS CLI?", "answer_span": "the default value of 0 is used.", "chunk": "the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity provider strategy, only one capacity provider can have a defined base value. If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. Capacity providers for the EC2 launch type 694 Amazon Elastic Container Service Developer Guide • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. 
However, you must force a new deployment when doing so. • Amazon ECS supports Amazon EC2 Auto Scaling warm pools. A warm pool is a group of preinitialized Amazon EC2 instances ready to be placed into service. Whenever your application needs to scale out, Amazon EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. For more information about"} +{"global_id": 545, "doc_id": "ecs", "chunk_id": "22", "question_id": 2, "question": "What must at least one capacity provider have in a capacity provider strategy?", "answer_span": "at least one of the capacity providers must have a weight value that's greater than zero.", "chunk": "the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity provider strategy, only one capacity provider can have a defined base value. If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. Capacity providers for the EC2 launch type 694 Amazon Elastic Container Service Developer Guide • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so. • Amazon ECS supports Amazon EC2 Auto Scaling warm pools. A warm pool is a group of preinitialized Amazon EC2 instances ready to be placed into service. Whenever your application needs to scale out, Amazon EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. For more information about"} +{"global_id": 546, "doc_id": "ecs", "chunk_id": "22", "question_id": 3, "question": "What happens if no base value is specified in a capacity provider strategy?", "answer_span": "the default value of zero is used.", "chunk": "the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity provider strategy, only one capacity provider can have a defined base value. 
If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. Capacity providers for the EC2 launch type 694 Amazon Elastic Container Service Developer Guide • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so. • Amazon ECS supports Amazon EC2 Auto Scaling warm pools. A warm pool is a group of preinitialized Amazon EC2 instances ready to be placed into service. Whenever your application needs to scale out, Amazon EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. For more information about"} +{"global_id": 547, "doc_id": "ecs", "chunk_id": "22", "question_id": 4, "question": "What does Amazon EC2 Auto Scaling use from the warm pool when scaling out?", "answer_span": "the pre-initialized instances from the warm pool rather than launching cold instances.", "chunk": "the default value of 1 is used. If using the API or AWS CLI, the default value of 0 is used. • When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any RunTask or CreateService actions using the capacity provider strategy fail. • In a capacity provider strategy, only one capacity provider can have a defined base value. If no base value is specified, the default value of zero is used. • A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both. Capacity providers for the EC2 launch type 694 Amazon Elastic Container Service Developer Guide • A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so. • Amazon ECS supports Amazon EC2 Auto Scaling warm pools. A warm pool is a group of preinitialized Amazon EC2 instances ready to be placed into service. Whenever your application needs to scale out, Amazon EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. 
For more information about"} +{"global_id": 548, "doc_id": "ecs", "chunk_id": "23", "question_id": 1, "question": "What does EC2 Auto Scaling use from the warm pool?", "answer_span": "EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances.", "chunk": "EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. For more information about creating an Amazon EC2 Auto Scaling launch template, see Auto Scaling launch templates in the Amazon EC2 Auto Scaling User Guide. For more information about creating an Amazon EC2 Auto Scaling group, see Auto Scaling groups in the Amazon EC2 Auto Scaling User Guide. Amazon EC2 container instance security considerations for Amazon ECS You should consider a single container instance and its access within your threat model. For example, a single affected task might be able to leverage the IAM permissions of a non-infected task on the same instance. We recommend that you use the following to help prevent this: • Do not use administrator privileges when running your tasks. • Assign a task role with least-privileged access to your tasks. The container agent automatically creates a token with a unique credential ID which are used to access Amazon ECS resources. • To prevent containers run by tasks that use the awsvpc network mode from accessing the credential information supplied to the Amazon EC2 instance profile, while still allowing the permissions that are provided by the task role set the ECS_AWSVPC_BLOCK_IMDS agent configuration variable to true in the agent configuration file and restart the agent. • Use Amazon GuardDuty Runtime Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. EC2"} +{"global_id": 549, "doc_id": "ecs", "chunk_id": "23", "question_id": 2, "question": "What should you consider within your threat model regarding a single container instance?", "answer_span": "You should consider a single container instance and its access within your threat model.", "chunk": "EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. For more information about creating an Amazon EC2 Auto Scaling launch template, see Auto Scaling launch templates in the Amazon EC2 Auto Scaling User Guide. For more information about creating an Amazon EC2 Auto Scaling group, see Auto Scaling groups in the Amazon EC2 Auto Scaling User Guide. Amazon EC2 container instance security considerations for Amazon ECS You should consider a single container instance and its access within your threat model. For example, a single affected task might be able to leverage the IAM permissions of a non-infected task on the same instance. 
We recommend that you use the following to help prevent this: • Do not use administrator privileges when running your tasks. • Assign a task role with least-privileged access to your tasks. The container agent automatically creates a token with a unique credential ID which are used to access Amazon ECS resources. • To prevent containers run by tasks that use the awsvpc network mode from accessing the credential information supplied to the Amazon EC2 instance profile, while still allowing the permissions that are provided by the task role set the ECS_AWSVPC_BLOCK_IMDS agent configuration variable to true in the agent configuration file and restart the agent. • Use Amazon GuardDuty Runtime Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. EC2"} +{"global_id": 550, "doc_id": "ecs", "chunk_id": "23", "question_id": 3, "question": "What is recommended to help prevent security issues with tasks?", "answer_span": "We recommend that you use the following to help prevent this: • Do not use administrator privileges when running your tasks. • Assign a task role with least-privileged access to your tasks.", "chunk": "EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. For more information about creating an Amazon EC2 Auto Scaling launch template, see Auto Scaling launch templates in the Amazon EC2 Auto Scaling User Guide. For more information about creating an Amazon EC2 Auto Scaling group, see Auto Scaling groups in the Amazon EC2 Auto Scaling User Guide. Amazon EC2 container instance security considerations for Amazon ECS You should consider a single container instance and its access within your threat model. For example, a single affected task might be able to leverage the IAM permissions of a non-infected task on the same instance. We recommend that you use the following to help prevent this: • Do not use administrator privileges when running your tasks. • Assign a task role with least-privileged access to your tasks. The container agent automatically creates a token with a unique credential ID which are used to access Amazon ECS resources. • To prevent containers run by tasks that use the awsvpc network mode from accessing the credential information supplied to the Amazon EC2 instance profile, while still allowing the permissions that are provided by the task role set the ECS_AWSVPC_BLOCK_IMDS agent configuration variable to true in the agent configuration file and restart the agent. • Use Amazon GuardDuty Runtime Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. 
EC2"} +{"global_id": 551, "doc_id": "ecs", "chunk_id": "23", "question_id": 4, "question": "What does Runtime Monitoring use to detect threats for clusters and containers?", "answer_span": "Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads.", "chunk": "EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see Configuring pre-initialized instances for your Amazon ECS Auto Scaling group. For more information about creating an Amazon EC2 Auto Scaling launch template, see Auto Scaling launch templates in the Amazon EC2 Auto Scaling User Guide. For more information about creating an Amazon EC2 Auto Scaling group, see Auto Scaling groups in the Amazon EC2 Auto Scaling User Guide. Amazon EC2 container instance security considerations for Amazon ECS You should consider a single container instance and its access within your threat model. For example, a single affected task might be able to leverage the IAM permissions of a non-infected task on the same instance. We recommend that you use the following to help prevent this: • Do not use administrator privileges when running your tasks. • Assign a task role with least-privileged access to your tasks. The container agent automatically creates a token with a unique credential ID which are used to access Amazon ECS resources. • To prevent containers run by tasks that use the awsvpc network mode from accessing the credential information supplied to the Amazon EC2 instance profile, while still allowing the permissions that are provided by the task role set the ECS_AWSVPC_BLOCK_IMDS agent configuration variable to true in the agent configuration file and restart the agent. • Use Amazon GuardDuty Runtime Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. EC2"} +{"global_id": 552, "doc_id": "ecs", "chunk_id": "24", "question_id": 1, "question": "What does Runtime Monitoring use to add runtime visibility into individual Amazon ECS workloads?", "answer_span": "Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads", "chunk": "Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. EC2 container instance security 695 Amazon Elastic Container Service Developer Guide Creating an Amazon ECS cluster for the Amazon EC2 launch type You create a cluster to define the infrastructure your tasks and services run on. Before you begin, be sure that you've completed the steps in Set up to use Amazon ECS and assign the appropriate IAM permission. For more information, see the section called “Amazon ECS cluster examples”. The Amazon ECS console provides a simple way to create the resources that are needed by an Amazon ECS cluster by creating a AWS CloudFormation stack. 
To make the cluster creation process as easy as possible, the console has default selections for many choices which we describe below. There are also help panels available for most of the sections in the console which provide further context. You can register Amazon EC2 instances when you create the cluster or register additional instances with the cluster after it has been created. You can modify the following default options: • Change the subnets where your instances launch • Change the security groups used to control traffic to the container instances • Add a namespace to the cluster. A namespace allows services that you create in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and"} +{"global_id": 553, "doc_id": "ecs", "chunk_id": "24", "question_id": 2, "question": "What is the purpose of creating a cluster in Amazon ECS?", "answer_span": "You create a cluster to define the infrastructure your tasks and services run on.", "chunk": "Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. EC2 container instance security 695 Amazon Elastic Container Service Developer Guide Creating an Amazon ECS cluster for the Amazon EC2 launch type You create a cluster to define the infrastructure your tasks and services run on. Before you begin, be sure that you've completed the steps in Set up to use Amazon ECS and assign the appropriate IAM permission. For more information, see the section called “Amazon ECS cluster examples”. The Amazon ECS console provides a simple way to create the resources that are needed by an Amazon ECS cluster by creating a AWS CloudFormation stack. To make the cluster creation process as easy as possible, the console has default selections for many choices which we describe below. There are also help panels available for most of the sections in the console which provide further context. You can register Amazon EC2 instances when you create the cluster or register additional instances with the cluster after it has been created. You can modify the following default options: • Change the subnets where your instances launch • Change the security groups used to control traffic to the container instances • Add a namespace to the cluster. A namespace allows services that you create in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . 
CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and"} +{"global_id": 554, "doc_id": "ecs", "chunk_id": "24", "question_id": 3, "question": "What can you modify when creating an Amazon ECS cluster?", "answer_span": "You can modify the following default options: • Change the subnets where your instances launch • Change the security groups used to control traffic to the container instances • Add a namespace to the cluster.", "chunk": "Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. EC2 container instance security 695 Amazon Elastic Container Service Developer Guide Creating an Amazon ECS cluster for the Amazon EC2 launch type You create a cluster to define the infrastructure your tasks and services run on. Before you begin, be sure that you've completed the steps in Set up to use Amazon ECS and assign the appropriate IAM permission. For more information, see the section called “Amazon ECS cluster examples”. The Amazon ECS console provides a simple way to create the resources that are needed by an Amazon ECS cluster by creating a AWS CloudFormation stack. To make the cluster creation process as easy as possible, the console has default selections for many choices which we describe below. There are also help panels available for most of the sections in the console which provide further context. You can register Amazon EC2 instances when you create the cluster or register additional instances with the cluster after it has been created. You can modify the following default options: • Change the subnets where your instances launch • Change the security groups used to control traffic to the container instances • Add a namespace to the cluster. A namespace allows services that you create in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and"} +{"global_id": 555, "doc_id": "ecs", "chunk_id": "24", "question_id": 4, "question": "What does CloudWatch Container Insights do?", "answer_span": "CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications", "chunk": "Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see GuardDuty Runtime Monitoring in the GuardDuty User Guide. EC2 container instance security 695 Amazon Elastic Container Service Developer Guide Creating an Amazon ECS cluster for the Amazon EC2 launch type You create a cluster to define the infrastructure your tasks and services run on. Before you begin, be sure that you've completed the steps in Set up to use Amazon ECS and assign the appropriate IAM permission. For more information, see the section called “Amazon ECS cluster examples”. 
The Amazon ECS console provides a simple way to create the resources that are needed by an Amazon ECS cluster by creating a AWS CloudFormation stack. To make the cluster creation process as easy as possible, the console has default selections for many choices which we describe below. There are also help panels available for most of the sections in the console which provide further context. You can register Amazon EC2 instances when you create the cluster or register additional instances with the cluster after it has been created. You can modify the following default options: • Change the subnets where your instances launch • Change the security groups used to control traffic to the container instances • Add a namespace to the cluster. A namespace allows services that you create in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and"} +{"global_id": 556, "doc_id": "ecs", "chunk_id": "25", "question_id": 1, "question": "What does CloudWatch Container Insights collect?", "answer_span": "CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and microservices.", "chunk": "in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and microservices. Container Insights also provides diagnostic information, such as container restart failures, that you use to isolate issues and resolve them quickly. For more information, see the section called “Monitor Amazon ECS containers using Container Insights with enhanced observability”. On December 2, 2024, AWS released Container Insights with enhanced observability for Amazon ECS. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays your data in dashboards that show you a variety of metrics and dimensions. You can then use these outCreating a cluster for the Amazon EC2 launch type 696 Amazon Elastic Container Service Developer Guide of-the-box dashboards on the Container Insights console to better understand your container health and performance, and to mitigate issues faster by identifying anomalies. We recommend that you use Container Insights with enhanced observability instead of Container Insights because it provides detailed visibility in your container environment, reducing the mean time to resolution. • Assign a AWS KMS key for your managed storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Assign a AWS KMS key for your Fargate ephemeral storage. 
For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide."} +{"global_id": 557, "doc_id": "ecs", "chunk_id": "25", "question_id": 2, "question": "When was Container Insights with enhanced observability for Amazon ECS released?", "answer_span": "On December 2, 2024, AWS released Container Insights with enhanced observability for Amazon ECS.", "chunk": "in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and microservices. Container Insights also provides diagnostic information, such as container restart failures, that you use to isolate issues and resolve them quickly. For more information, see the section called “Monitor Amazon ECS containers using Container Insights with enhanced observability”. On December 2, 2024, AWS released Container Insights with enhanced observability for Amazon ECS. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays your data in dashboards that show you a variety of metrics and dimensions. You can then use these outCreating a cluster for the Amazon EC2 launch type 696 Amazon Elastic Container Service Developer Guide of-the-box dashboards on the Container Insights console to better understand your container health and performance, and to mitigate issues faster by identifying anomalies. We recommend that you use Container Insights with enhanced observability instead of Container Insights because it provides detailed visibility in your container environment, reducing the mean time to resolution. • Assign a AWS KMS key for your managed storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Assign a AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide."} +{"global_id": 558, "doc_id": "ecs", "chunk_id": "25", "question_id": 3, "question": "What does Container Insights auto-collect after configuration?", "answer_span": "Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment.", "chunk": "in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and microservices. Container Insights also provides diagnostic information, such as container restart failures, that you use to isolate issues and resolve them quickly. For more information, see the section called “Monitor Amazon ECS containers using Container Insights with enhanced observability”. On December 2, 2024, AWS released Container Insights with enhanced observability for Amazon ECS. 
This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays your data in dashboards that show you a variety of metrics and dimensions. You can then use these outCreating a cluster for the Amazon EC2 launch type 696 Amazon Elastic Container Service Developer Guide of-the-box dashboards on the Container Insights console to better understand your container health and performance, and to mitigate issues faster by identifying anomalies. We recommend that you use Container Insights with enhanced observability instead of Container Insights because it provides detailed visibility in your container environment, reducing the mean time to resolution. • Assign a AWS KMS key for your managed storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Assign a AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide."} +{"global_id": 559, "doc_id": "ecs", "chunk_id": "25", "question_id": 4, "question": "What is recommended instead of Container Insights?", "answer_span": "We recommend that you use Container Insights with enhanced observability instead of Container Insights because it provides detailed visibility in your container environment, reducing the mean time to resolution.", "chunk": "in the cluster can connect to the other services in the namespace without additional configuration. For more information, see Interconnect Amazon ECS services. • Turn on Container Insights with enhanced observability, or Container Insights . CloudWatch Container Insights collects, aggregates, and summarizes metrics and logs from your containerized applications and microservices. Container Insights also provides diagnostic information, such as container restart failures, that you use to isolate issues and resolve them quickly. For more information, see the section called “Monitor Amazon ECS containers using Container Insights with enhanced observability”. On December 2, 2024, AWS released Container Insights with enhanced observability for Amazon ECS. This version supports enhanced observability for Amazon ECS clusters using the Amazon EC2 and Fargate launch types. After you configure Container Insights with enhanced observability on Amazon ECS, Container Insights auto-collects detailed infrastructure telemetry from the cluster level down to the container level in your environment and displays your data in dashboards that show you a variety of metrics and dimensions. You can then use these outCreating a cluster for the Amazon EC2 launch type 696 Amazon Elastic Container Service Developer Guide of-the-box dashboards on the Container Insights console to better understand your container health and performance, and to mitigate issues faster by identifying anomalies. We recommend that you use Container Insights with enhanced observability instead of Container Insights because it provides detailed visibility in your container environment, reducing the mean time to resolution. • Assign a AWS KMS key for your managed storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. 
• Assign a AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide."} +{"global_id": 560, "doc_id": "ecs", "chunk_id": "26", "question_id": 1, "question": "How do you create a key?", "answer_span": "see Create a KMS key in the AWS Key Management Service User Guide.", "chunk": "how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Assign a AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Add tags to help you identify your cluster. Auto Scaling group options When you use Amazon EC2 instances, you must specify an Auto Scaling group to manage the infrastructure that your tasks and services run on. When you choose to create a new Auto Scaling group, it is automatically configured for the following behavior: • Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • Amazon ECS will not prevent Amazon EC2 instances that contain tasks and that are in an Auto Scaling group from being terminated during a scale-in action. For more information, see Instance Protection in the AWS Auto Scaling User Guide. You configure the following Auto Scaling group properties which determine the type and number of instances to launch for the group: • The Amazon ECS-optimized AMI. • The instance type. • The SSH key pair that proves your identity when you connect to the instance. For information about how to create SSH keys, see Amazon EC2 key pairs and Linux instances in the Amazon EC2 User Guide. • The minimum number of instances to launch for the Auto Scaling group. • The maximum number of instances that are started for the Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 561, "doc_id": "ecs", "chunk_id": "26", "question_id": 2, "question": "What must you specify when using Amazon EC2 instances?", "answer_span": "you must specify an Auto Scaling group to manage the infrastructure that your tasks and services run on.", "chunk": "how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Assign a AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Add tags to help you identify your cluster. Auto Scaling group options When you use Amazon EC2 instances, you must specify an Auto Scaling group to manage the infrastructure that your tasks and services run on. When you choose to create a new Auto Scaling group, it is automatically configured for the following behavior: • Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • Amazon ECS will not prevent Amazon EC2 instances that contain tasks and that are in an Auto Scaling group from being terminated during a scale-in action. For more information, see Instance Protection in the AWS Auto Scaling User Guide. You configure the following Auto Scaling group properties which determine the type and number of instances to launch for the group: • The Amazon ECS-optimized AMI. • The instance type. • The SSH key pair that proves your identity when you connect to the instance. For information about how to create SSH keys, see Amazon EC2 key pairs and Linux instances in the Amazon EC2 User Guide. • The minimum number of instances to launch for the Auto Scaling group. 
• The maximum number of instances that are started for the Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 562, "doc_id": "ecs", "chunk_id": "26", "question_id": 3, "question": "What does Amazon ECS manage in the Auto Scaling group?", "answer_span": "Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group.", "chunk": "how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Assign a AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Add tags to help you identify your cluster. Auto Scaling group options When you use Amazon EC2 instances, you must specify an Auto Scaling group to manage the infrastructure that your tasks and services run on. When you choose to create a new Auto Scaling group, it is automatically configured for the following behavior: • Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • Amazon ECS will not prevent Amazon EC2 instances that contain tasks and that are in an Auto Scaling group from being terminated during a scale-in action. For more information, see Instance Protection in the AWS Auto Scaling User Guide. You configure the following Auto Scaling group properties which determine the type and number of instances to launch for the group: • The Amazon ECS-optimized AMI. • The instance type. • The SSH key pair that proves your identity when you connect to the instance. For information about how to create SSH keys, see Amazon EC2 key pairs and Linux instances in the Amazon EC2 User Guide. • The minimum number of instances to launch for the Auto Scaling group. • The maximum number of instances that are started for the Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 563, "doc_id": "ecs", "chunk_id": "26", "question_id": 4, "question": "What is required to prove your identity when connecting to the instance?", "answer_span": "The SSH key pair that proves your identity when you connect to the instance.", "chunk": "how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Assign a AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see Create a KMS key in the AWS Key Management Service User Guide. • Add tags to help you identify your cluster. Auto Scaling group options When you use Amazon EC2 instances, you must specify an Auto Scaling group to manage the infrastructure that your tasks and services run on. When you choose to create a new Auto Scaling group, it is automatically configured for the following behavior: • Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group. • Amazon ECS will not prevent Amazon EC2 instances that contain tasks and that are in an Auto Scaling group from being terminated during a scale-in action. For more information, see Instance Protection in the AWS Auto Scaling User Guide. You configure the following Auto Scaling group properties which determine the type and number of instances to launch for the group: • The Amazon ECS-optimized AMI. • The instance type. • The SSH key pair that proves your identity when you connect to the instance. For information about how to create SSH keys, see Amazon EC2 key pairs and Linux instances in the Amazon EC2 User Guide. • The minimum number of instances to launch for the Auto Scaling group. 
• The maximum number of instances that are started for the Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 564, "doc_id": "ecs", "chunk_id": "27", "question_id": 1, "question": "What is being created for the Amazon EC2 launch type?", "answer_span": "Creating a cluster for the Amazon EC2 launch type", "chunk": "Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 565, "doc_id": "ecs", "chunk_id": "27", "question_id": 2, "question": "What type of group is mentioned in the text?", "answer_span": "Auto Scaling group", "chunk": "Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 566, "doc_id": "ecs", "chunk_id": "27", "question_id": 3, "question": "What is the launch type number mentioned?", "answer_span": "launch type 697", "chunk": "Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 567, "doc_id": "ecs", "chunk_id": "27", "question_id": 4, "question": "What is the purpose of the Auto Scaling group?", "answer_span": "Creating a cluster for the Amazon EC2 launch type", "chunk": "Auto Scaling group. Creating a cluster for the Amazon EC2 launch type 697"} +{"global_id": 568, "doc_id": "ec2", "chunk_id": "0", "question_id": 1, "question": "What does Amazon EC2 provide?", "answer_span": "Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. 
Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 569, "doc_id": "ec2", "chunk_id": "0", "question_id": 2, "question": "What is an EC2 instance?", "answer_span": "An EC2 instance is a virtual server in the AWS Cloud.", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 570, "doc_id": "ec2", "chunk_id": "0", "question_id": 3, "question": "What are Amazon Machine Images (AMIs)?", "answer_span": "Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software).", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. 
For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 571, "doc_id": "ec2", "chunk_id": "0", "question_id": 4, "question": "What are security groups in Amazon EC2?", "answer_span": "Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to", "chunk": "Amazon Elastic Compute Cloud User Guide What is Amazon EC2? Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can reduce capacity (scale down) again. An EC2 instance is a virtual server in the AWS Cloud. When you launch an EC2 instance, the instance type that you specify determines the hardware available to your instance. Each instance type offers a different balance of compute, memory, network, and storage resources. For more information, see the Amazon EC2 Instance Types Guide. Features of Amazon EC2 Amazon EC2 provides the following high-level features: Instances Virtual servers. Amazon Machine Images (AMIs) Preconfigured templates for your instances that package the components you need for your server (including the operating system and additional software). Instance types Various configurations of CPU, memory, storage, networking capacity, and graphics hardware for your instances. Features 1 Amazon Elastic Compute Cloud User Guide Amazon EBS volumes Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS). Instance store volumes Storage volumes for temporary data that is deleted when you stop, hibernate, or terminate your instance. Key pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. 
Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to"} +{"global_id": 572, "doc_id": "ec2", "chunk_id": "1", "question_id": 1, "question": "What does AWS store for secure login information?", "answer_span": "AWS stores the public key and you store the private key in a secure place.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon"} +{"global_id": 573, "doc_id": "ec2", "chunk_id": "1", "question_id": 2, "question": "What is a security group?", "answer_span": "A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. 
AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon"} +{"global_id": 574, "doc_id": "ec2", "chunk_id": "1", "question_id": 3, "question": "What standard has Amazon EC2 been validated as compliant with?", "answer_span": "has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS).", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon"} +{"global_id": 575, "doc_id": "ec2", "chunk_id": "1", "question_id": 4, "question": "What interface allows you to create and manage Amazon EC2 instances?", "answer_span": "Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources.", "chunk": "pairs Secure login information for your instances. AWS stores the public key and you store the private key in a secure place. 
Security groups A virtual firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, and the destination IP ranges to which your instances can connect. Amazon EC2 supports the processing, storage, and transmission of credit card data by a merchant or service provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI Compliance Package, see PCI DSS Level 1. Related services Services to use with Amazon EC2 You can use other AWS services with the instances that you deploy using Amazon EC2. Amazon EC2 Auto Scaling Helps ensure you have the correct number of Amazon EC2 instances available to handle the load for your application. AWS Backup Automate backing up your Amazon EC2 instances and the Amazon EBS volumes attached to them. Amazon CloudWatch Monitor your instances and Amazon EBS volumes. Related services 2 Amazon Elastic Compute Cloud User Guide Access Amazon EC2 You can create and manage your Amazon EC2 instances using the following interfaces: Amazon EC2 console A simple web interface to create and manage Amazon EC2 instances and resources. If you've signed up for an AWS account, you can access the Amazon EC2 console by signing into the AWS Management Console and selecting EC2 from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon"} +{"global_id": 576, "doc_id": "ec2", "chunk_id": "2", "question_id": 1, "question": "What does the AWS Command Line Interface enable you to do?", "answer_span": "AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell.", "chunk": "from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. 
The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute"} +{"global_id": 577, "doc_id": "ec2", "chunk_id": "2", "question_id": 2, "question": "What format can you use to create a template for AWS CloudFormation?", "answer_span": "You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you.", "chunk": "from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute"} +{"global_id": 578, "doc_id": "ec2", "chunk_id": "2", "question_id": 3, "question": "What do AWS SDKs provide for software developers?", "answer_span": "AWS provides libraries, sample code, tutorials, and other resources for software developers.", "chunk": "from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. 
You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute"} +{"global_id": 579, "doc_id": "ec2", "chunk_id": "2", "question_id": 4, "question": "What do the Tools for PowerShell enable you to do?", "answer_span": "The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line.", "chunk": "from the console home page. AWS Command Line Interface Enables you to interact with AWS services using commands in your command-line shell. It is supported on Windows, Mac, and Linux. For more information about the AWS CLI , see AWS Command Line Interface User Guide. You can find the Amazon EC2 commands in the AWS CLI Command Reference. AWS CloudFormation Amazon EC2 supports creating resources using AWS CloudFormation. You create a template, in JSON or YAML format, that describes your AWS resources, and AWS CloudFormation provisions and configures those resources for you. You can reuse your CloudFormation templates to provision the same resources multiple times, whether in the same Region and account or in multiple Regions and accounts. For more information about supported resource types and properties for Amazon EC2, see EC2 resource type reference in the AWS CloudFormation User Guide. AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, AWS provides libraries, sample code, tutorials, and other resources for software developers. These libraries provide basic functions that automate tasks such as cryptographically signing your requests, retrying requests, and handling error responses, making it easier for you to get started. For more information, see Tools to Build on AWS. AWS Tools for PowerShell A set of PowerShell modules that are built on the functionality exposed by the SDK for .NET. The Tools for PowerShell enable you to script operations on your AWS resources from the PowerShell command line. To get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. 
Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute"} +{"global_id": 580, "doc_id": "ec2", "chunk_id": "3", "question_id": 1, "question": "What is an instance in Amazon EC2?", "answer_span": "An instance is a virtual server in the AWS Cloud.", "chunk": "get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free"} +{"global_id": 581, "doc_id": "ec2", "chunk_id": "3", "question_id": 2, "question": "What is a key pair used for in Amazon EC2?", "answer_span": "A key pair – A set of security credentials that you use to prove your identity when connecting to your instance.", "chunk": "get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. 
• An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free"} +{"global_id": 582, "doc_id": "ec2", "chunk_id": "3", "question_id": 3, "question": "What does a security group do in Amazon EC2?", "answer_span": "A security group – Acts as a virtual firewall to control inbound and outbound traffic.", "chunk": "get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. • A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free"} +{"global_id": 583, "doc_id": "ec2", "chunk_id": "3", "question_id": 4, "question": "How can you get started with Amazon EC2 for free?", "answer_span": "When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier.", "chunk": "get started, see the AWS Tools for PowerShell User Guide. You can find the cmdlets for Amazon EC2, in the AWS Tools for PowerShell Cmdlet Reference. Access EC2 4 Amazon Elastic Compute Cloud User Guide Get started with Amazon EC2 Use this tutorial to get started with Amazon Elastic Compute Cloud (Amazon EC2). You'll learn how to launch and connect to an EC2 instance. An instance is a virtual server in the AWS Cloud. With Amazon EC2, you can set up and configure the operating system and applications that run on your instance. Overview The following diagram shows the key components that you'll use in this tutorial: • An image – A template that contains the software to run on your instance, such as the operating system. • A key pair – A set of security credentials that you use to prove your identity when connecting to your instance. The public key is on your instance and the private key is on your computer. 
• A network – A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. To help you get started quickly, your account comes with a default VPC in each AWS Region, and each default VPC has a default subnet in each Availability Zone. • A security group – Acts as a virtual firewall to control inbound and outbound traffic. • An EBS volume – We require a root volume for the image. You can optionally add data volumes. 8 Amazon Elastic Compute Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free"} +{"global_id": 584, "doc_id": "ec2", "chunk_id": "4", "question_id": 1, "question": "What can you use to get started with Amazon EC2 for free?", "answer_span": "you can get started with Amazon EC2 for free using the AWS Free Tier.", "chunk": "Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console"} +{"global_id": 585, "doc_id": "ec2", "chunk_id": "4", "question_id": 2, "question": "What happens if you created your AWS account before July 15, 2025 and haven't exceeded the Free Tier benefits?", "answer_span": "it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits.", "chunk": "Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. 
Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console"} +{"global_id": 586, "doc_id": "ec2", "chunk_id": "4", "question_id": 3, "question": "What should you do to launch an EC2 instance?", "answer_span": "You can launch an EC2 instance using the AWS Management Console as described in the following procedure.", "chunk": "Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. 
From the EC2 console"} +{"global_id": 587, "doc_id": "ec2", "chunk_id": "4", "question_id": 4, "question": "Where can you find the current AWS Region?", "answer_span": "In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio.", "chunk": "Cloud User Guide Cost for this tutorial When you create your AWS account, you can get started with Amazon EC2 for free using the AWS Free Tier. If you created your AWS account before July 15, 2025, it's less than 12 months old, and you haven't already exceeded the Free Tier benefits for Amazon EC2, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. Otherwise, you'll incur the standard Amazon EC2 usage fees from the time that you launch the instance (even if it remains idle) until you terminate it. If you created your AWS account on or after July 15, 2025, it's less than 6 months old, and you haven't used up all your credits, it won't cost you anything to complete this tutorial, because we help you select options that are within the Free Tier benefits. For information on how to determine whether you are eligible for the Free Tier, see the section called “Track your Free Tier usage”. Tasks • Step 1: Launch an instance • Step 2: Connect to your instance • Step 3: Clean up your instance 9 Amazon Elastic Compute Cloud User Guide • Next steps Step 1: Launch an instance You can launch an EC2 instance using the AWS Management Console as described in the following procedure. This tutorial is intended to help you quickly launch your first instance, so it doesn't cover all possible options. To launch an instance 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console"} +{"global_id": 588, "doc_id": "ec2", "chunk_id": "5", "question_id": 1, "question": "What is the first step to launch an instance?", "answer_span": "1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. 
Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize"} +{"global_id": 589, "doc_id": "ec2", "chunk_id": "5", "question_id": 2, "question": "What should you enter for the Name under Name and tags?", "answer_span": "Under Name and tags, for Name, enter a descriptive name for your instance.", "chunk": "1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize"} +{"global_id": 590, "doc_id": "ec2", "chunk_id": "5", "question_id": 3, "question": "What type of AMI should you select for your first Linux instance?", "answer_span": "For your first Linux instance, we recommend that you choose Amazon Linux.", "chunk": "1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. 
For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize"} +{"global_id": 591, "doc_id": "ec2", "chunk_id": "5", "question_id": 4, "question": "What happens if you choose Proceed without a key pair?", "answer_span": "Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial.", "chunk": "1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation bar at the top of the screen, we display the current AWS Region — for example, Ohio. You can use the selected Region, or optionally select a Region that is closer to you. 3. From the EC2 console dashboard, in the Launch instance pane, choose Launch instance. 4. Under Name and tags, for Name, enter a descriptive name for your instance. 5. Under Application and OS Images (Amazon Machine Image), do the following: a. Choose Quick Start, and then choose the operating system (OS) for your instance. For your first Linux instance, we recommend that you choose Amazon Linux. b. From Amazon Machine Image (AMI), select an AMI that is marked Free Tier eligible. 6. Under Instance type, for Instance type, select an instance type that is marked Free Tier eligible. 7. Under Key pair (login), for Key pair name, choose an existing key pair or choose Create new key pair to create your first key pair. Warning If you choose Proceed without a key pair (Not recommended), you won't be able to connect to your instance using the methods described in this tutorial. 8. Under Network settings, notice that we selected your default VPC, selected the option to use the default subnet in an Availability Zone that we choose for you, and configured a security group with a rule that allows connections to your instance from anywhere (0.0.0.0.0/0). Step 1: Launch an instance 10 Amazon Elastic Compute Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. 
In production, be sure to authorize"} +{"global_id": 592, "doc_id": "ec2", "chunk_id": "6", "question_id": 1, "question": "What happens if you specify 0.0.0.0/0?", "answer_span": "If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world.", "chunk": "Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. • (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the"} +{"global_id": 593, "doc_id": "ec2", "chunk_id": "6", "question_id": 2, "question": "What is recommended for your first instance?", "answer_span": "For your first instance, we recommend that you use the default settings.", "chunk": "Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. • (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. 
For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the"} +{"global_id": 594, "doc_id": "ec2", "chunk_id": "6", "question_id": 3, "question": "What must you allow for a Linux instance?", "answer_span": "For a Linux instance, you must allow SSH traffic.", "chunk": "Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. • (Optional) To use a different VPC, choose Edit and then choose an existing VPC. If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the"} +{"global_id": 595, "doc_id": "ec2", "chunk_id": "6", "question_id": 4, "question": "What is the initial instance state after selection?", "answer_span": "The initial instance state is pending.", "chunk": "Cloud User Guide Warning If you specify 0.0.0.0/0, you are enabling traffic from any IP addresses in the world. For the SSH and RDP protocols, you might consider this acceptable for a short time in a test environment, but it's unsafe for production environments. In production, be sure to authorize access only from the appropriate individual IP address or range of addresses. For your first instance, we recommend that you use the default settings. Otherwise, you can update your network settings as follows: 9. • (Optional) To use a specific default subnet, choose Edit and then choose a subnet. • (Optional) To use a different VPC, choose Edit and then choose an existing VPC. 
If the VPC isn't configured for public internet access, you won't be able to connect to your instance. • (Optional) To restrict inbound connection traffic to a specific network, choose Custom instead of Anywhere, and enter the CIDR block for your network. • (Optional) To use a different security group, choose Select existing security group and choose an existing security group. If the security group does not have a rule that allows connection traffic from your network, you won't be able to connect to your instance. For a Linux instance, you must allow SSH traffic. For a Windows instance, you must allow RDP traffic. Under Configure storage, notice that we configured a root volume but no data volumes. This is sufficient for test purposes. 10. Review a summary of your instance configuration in the Summary panel, and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the"} +{"global_id": 596, "doc_id": "ec2", "chunk_id": "7", "question_id": 1, "question": "What should you choose if the launch is successful?", "answer_span": "choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch.", "chunk": "and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the"} +{"global_id": 597, "doc_id": "ec2", "chunk_id": "7", "question_id": 2, "question": "What is the initial instance state after launching?", "answer_span": "The initial instance state is pending.", "chunk": "and when you're ready, choose Launch instance. 11. 
If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the"} +{"global_id": 598, "doc_id": "ec2", "chunk_id": "7", "question_id": 3, "question": "What can you use to connect to your Linux instance?", "answer_span": "You can connect to your Linux instance using any SSH client.", "chunk": "and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. 
The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the"} +{"global_id": 599, "doc_id": "ec2", "chunk_id": "7", "question_id": 4, "question": "What should you do if the ssh command is not found on Windows?", "answer_span": "install OpenSSH for Windows.", "chunk": "and when you're ready, choose Launch instance. 11. If the launch is successful, choose the ID of the instance from the Success notification to open the Instances page and monitor the status of the launch. 12. Select the checkbox for the instance. The initial instance state is pending. After the instance starts, its state changes to running. Choose the Status and alarms tab. After your instance passes its status checks, it is ready to receive connection requests. Step 1: Launch an instance 11 Amazon Elastic Compute Cloud User Guide Step 2: Connect to your instance The procedure that you use depends on the operating system of the instance. If you can't connect to your instance, see Troubleshoot issues connecting to your Amazon EC2 Linux instance for assistance. Linux instances You can connect to your Linux instance using any SSH client. If you are running Windows on your computer, open a terminal and run the ssh command to verify that you have an SSH client installed. If the command is not found, install OpenSSH for Windows. To connect to your instance using SSH 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the SSH client tab. 5. (Optional) If you created a key pair when you launched the instance and downloaded the private key (.pem file) to a computer running Linux or macOS, run the example chmod command to set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the"} +{"global_id": 600, "doc_id": "ec2", "chunk_id": "8", "question_id": 1, "question": "What must you do if the private key file is not in the current directory?", "answer_span": "If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command.", "chunk": "set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. 
User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in"} +{"global_id": 601, "doc_id": "ec2", "chunk_id": "8", "question_id": 2, "question": "What should you verify to ensure security when connecting to your instance?", "answer_span": "Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance.", "chunk": "set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. 
The default username for the Administrator account depends on the language of the operating system (OS) contained in"} +{"global_id": 602, "doc_id": "ec2", "chunk_id": "8", "question_id": 3, "question": "What happens if the fingerprints don't match?", "answer_span": "If the fingerprints don't match, someone might be attempting a man-in-the-middle attack.", "chunk": "set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in"} +{"global_id": 603, "doc_id": "ec2", "chunk_id": "8", "question_id": 4, "question": "What must you do to connect to a Windows instance using RDP?", "answer_span": "To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance.", "chunk": "set the permissions for your private key. 6. Copy the example SSH command. The following is an example, where key-pair-name.pem is the name of your private key file, ec2-user is the username associated with the image, and the string after the @ symbol is the public DNS name of the instance. ssh -i key-pair-name.pem ec2-user@ec2-198-51-100-1.us-east-2.compute.amazonaws.com 7. In a terminal window on your computer, run the ssh command that you saved in the previous step. If the private key file is not in the current directory, you must specify the fully-qualified path to the key file in this command. The following is an example response: The authenticity of host 'ec2-198-51-100-1.us-east-2.compute.amazonaws.com (198-51-100-1)' can't be established. 
ECDSA key fingerprint is l4UB/neBad9tvkgJf1QZWxheQmR59WgrgzEimCG6kZY. Are you sure you want to continue connecting (yes/no)? Step 2: Connect to your instance 12 Amazon Elastic Compute Cloud 8. User Guide (Optional) Verify that the fingerprint in the security alert matches the instance fingerprint contained in the console output when you first start an instance. To get the console output, choose Actions, Monitor and troubleshoot, Get system log. If the fingerprints don't match, someone might be attempting a man-in-the-middle attack. If they match, continue to the next step. 9. Enter yes. The following is an example response: Warning: Permanently added 'ec2-198-51-100-1.useast-2.compute.amazonaws.com' (ECDSA) to the list of known hosts. Windows instances To connect to a Windows instance using RDP, you must retrieve the initial administrator password and then enter this password when you connect to your instance. It takes a few minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in"} +{"global_id": 604, "doc_id": "ec2", "chunk_id": "9", "question_id": 1, "question": "What must your account have permission to call?", "answer_span": "Your account must have permission to call the GetPasswordData action.", "chunk": "minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. 
The Get Windows password page closes, and the default administrator password"} +{"global_id": 605, "doc_id": "ec2", "chunk_id": "9", "question_id": 2, "question": "What is the default username for the Administrator account for an English OS?", "answer_span": "for an English OS, the username is Administrator", "chunk": "minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password"} +{"global_id": 606, "doc_id": "ec2", "chunk_id": "9", "question_id": 3, "question": "What should you choose if there is no username in the same language as your OS?", "answer_span": "choose Administrator (Other)", "chunk": "minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. 
For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password"} +{"global_id": 607, "doc_id": "ec2", "chunk_id": "9", "question_id": 4, "question": "What is the first step to retrieve the initial administrator password?", "answer_span": "Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/", "chunk": "minutes after instance launch before this password is available. Your account must have permission to call the GetPasswordData action. For more information, see Example policies to control access the Amazon EC2 API. The default username for the Administrator account depends on the language of the operating system (OS) contained in the AMI. To determine the correct username, identify the language of the OS, and then choose the corresponding username. For example, for an English OS, the username is Administrator, for a French OS it's Administrateur, and for a Portuguese OS it's Administrador. If a language version of the OS does not have a username in the same language, choose the username Administrator (Other). For more information, see Localized Names for Administrator Account in Windows in the Microsoft website. To retrieve the initial administrator password 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. In the navigation pane, choose Instances. 3. Select the instance and then choose Connect. 4. On the Connect to instance page, choose the RDP client tab. 5. For Username, choose the default username for the Administrator account. The username you choose must match the language of the operating system (OS) contained in the AMI that you Step 2: Connect to your instance 13 Amazon Elastic Compute Cloud User Guide used to launch your instance. If there is no username in the same language as your OS, choose Administrator (Other). 6. Choose Get password. 7. On the Get Windows password page, do the following: a. Choose Upload private key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password"} +{"global_id": 608, "doc_id": "ec2", "chunk_id": "10", "question_id": 1, "question": "What file do you need to navigate to when launching the instance?", "answer_span": "navigate to the private key (.pem) file that you specified when you launched the instance.", "chunk": "key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. 
The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following:"} +{"global_id": 609, "doc_id": "ec2", "chunk_id": "10", "question_id": 2, "question": "What should you do after selecting the private key file?", "answer_span": "Select the file and choose Open to copy the entire contents of the file to this window.", "chunk": "key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. 
Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following:"} +{"global_id": 610, "doc_id": "ec2", "chunk_id": "10", "question_id": 3, "question": "What appears under Password after choosing Decrypt password?", "answer_span": "the default administrator password for the instance appears under Password, replacing the Get password link shown previously.", "chunk": "key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following:"} +{"global_id": 611, "doc_id": "ec2", "chunk_id": "10", "question_id": 4, "question": "What is the first step to connect to a Windows instance using an RDP client?", "answer_span": "On the Connect to instance page, choose Download remote desktop file.", "chunk": "key file and navigate to the private key (.pem) file that you specified when you launched the instance. Select the file and choose Open to copy the entire contents of the file to this window. b. Choose Decrypt password. The Get Windows password page closes, and the default administrator password for the instance appears under Password, replacing the Get password link shown previously. c. Copy the password and save it in a safe place. This password is required to connect to the instance. The following procedure uses the Remote Desktop Connection client for Windows (MSTSC). If you're using a different RDP client, download the RDP file and then see the documentation for the RDP client for the steps to establish the RDP connection. To connect to a Windows instance using an RDP client 1. On the Connect to instance page, choose Download remote desktop file. When the file download is finished, choose Cancel to return to the Instances page. The RDP file is downloaded to your Downloads folder. 2. Run mstsc.exe to open the RDP client. 3. 
Expand Show options, choose Open, and select the .rdp file from your Downloads folder. 4. By default, Computer is the public IPv4 DNS name of the instance and User name is the administrator account. To connect to the instance using IPv6 instead, replace the public IPv4 DNS name of the instance with its IPv6 address. Review the default settings and change them as needed. 5. Choose Connect. If you receive a warning that the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following:"} +{"global_id": 612, "doc_id": "ec2", "chunk_id": "11", "question_id": 1, "question": "What should you do if you trust the certificate?", "answer_span": "If you trust the certificate, choose Yes to connect to your instance.", "chunk": "the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that"} +{"global_id": 613, "doc_id": "ec2", "chunk_id": "11", "question_id": 2, "question": "What should you compare to confirm the identity of the remote computer on Windows?", "answer_span": "Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer.", "chunk": "the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. 
Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that"} +{"global_id": 614, "doc_id": "ec2", "chunk_id": "11", "question_id": 3, "question": "What happens if the RDP connection is successful?", "answer_span": "If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop.", "chunk": "the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. 
Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that"} +{"global_id": 615, "doc_id": "ec2", "chunk_id": "11", "question_id": 4, "question": "What should you do after finishing with the instance created for the tutorial?", "answer_span": "After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance.", "chunk": "the publisher of the remote connection is unknown, choose Connect to continue. 6. Enter the password that you saved previously, and then choose OK. 7. Due to the nature of self-signed certificates, you might get a warning that the security certificate could not be authenticated. Do one of the following: • If you trust the certificate, choose Yes to connect to your instance. Step 2: Connect to your instance 14 Amazon Elastic Compute Cloud • User Guide [Windows] Before you proceed, compare the thumbprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose View certificate and then choose Thumbprint from the Details tab. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. • [Mac OS X] Before you proceed, compare the fingerprint of the certificate with the value in the system log to confirm the identity of the remote computer. Choose Show Certificate, expand Details, and choose SHA1 Fingerprints. Compare this value to the value of RDPCERTIFICATE-THUMBPRINT in Actions, Monitor and troubleshoot, Get system log. 8. If the RDP connection is successful, the RDP client displays the Windows login screen and then the Windows desktop. If you receive an error message instead, see the section called “Remote Desktop can't connect to the remote computer”. When you are finished with the RDP connection, you can close the RDP client. Step 3: Clean up your instance After you've finished with the instance that you created for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that"} +{"global_id": 616, "doc_id": "ec2", "chunk_id": "12", "question_id": 1, "question": "What should you do to clean up after this tutorial?", "answer_span": "you should clean up by terminating the instance.", "chunk": "for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. 
Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User"} +{"global_id": 617, "doc_id": "ec2", "chunk_id": "12", "question_id": 2, "question": "What happens when you terminate an instance?", "answer_span": "Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it.", "chunk": "for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User"} +{"global_id": 618, "doc_id": "ec2", "chunk_id": "12", "question_id": 3, "question": "How can you avoid incurring charges while keeping your instance for later?", "answer_span": "you can stop the instance now and then start it again later.", "chunk": "for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. 
You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User"} +{"global_id": 619, "doc_id": "ec2", "chunk_id": "12", "question_id": 4, "question": "What should you choose when prompted for confirmation to terminate your instance?", "answer_span": "Choose Terminate (delete) when prompted for confirmation.", "chunk": "for this tutorial, you should clean up by terminating the instance. If you want to do more with this instance before you clean up, see Next steps. Important Terminating an instance effectively deletes it; you can't reconnect to an instance after you've terminated it. You'll stop incurring charges for that instance or usage that counts against your Free Tier limits as soon as the instance status changes to shutting down or terminated. To keep your instance for later, but not incur charges or usage that counts against your Free Tier limits, you can stop the instance now and then start it again later. For more information, see Stop and start Amazon EC2 instances. To terminate your instance 1. In the navigation pane, choose Instances. In the list of instances, select the instance. 2. Choose Instance state, Terminate (delete) instance. Step 3: Clean up your instance 15 Amazon Elastic Compute Cloud 3. User Guide Choose Terminate (delete) when prompted for confirmation. Amazon EC2 shuts down and terminates your instance. After your instance is terminated, it remains visible on the console for a short while, and then the entry is automatically deleted. You cannot remove the terminated instance from the console display yourself. Next steps After you start your instance, you might want to explore the following next steps: • Explore the Amazon EC2 core concepts with the introductory tutorials. For more information, see Tutorials for launching EC2 instances. • Learn how to track your Amazon EC2 Free Tier usage using the console. For more information, see the section called “Track your Free Tier usage”. 
• Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User"} +{"global_id": 620, "doc_id": "ec2", "chunk_id": "13", "question_id": 1, "question": "What should you configure to notify you if your usage exceeds the Free Tier?", "answer_span": "Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025).", "chunk": "console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud"} +{"global_id": 621, "doc_id": "ec2", "chunk_id": "13", "question_id": 2, "question": "Where can you find more information about tracking your AWS Free Tier usage?", "answer_span": "For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide.", "chunk": "console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. 
Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud"} +{"global_id": 622, "doc_id": "ec2", "chunk_id": "13", "question_id": 3, "question": "What is recommended for managing access to AWS resources and APIs?", "answer_span": "Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible.", "chunk": "console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. 
For more information about using Security Hub, see Amazon Elastic Compute Cloud"} +{"global_id": 623, "doc_id": "ec2", "chunk_id": "13", "question_id": 4, "question": "What tool can you use to automatically discover and scan Amazon EC2 instances for software vulnerabilities?", "answer_span": "Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure.", "chunk": "console. For more information, see the section called “Track your Free Tier usage”. • Configure a CloudWatch alarm to notify you if your usage exceeds the Free Tier (for accounts created before July 15, 2025). For more information, see Tracking your AWS Free Tier usage in the AWS Billing User Guide. • Add an EBS volume. For more information, see Create an Amazon EBS volume in the Amazon EBS User Guide. • Learn how to remotely manage your EC2 instance using the Run command. For more information, see AWS Systems Manager Run Command in the AWS Systems Manager User Guide. • Learn about instance purchasing options. For more information, see Amazon EC2 billing and purchasing options. • Get advice about instance types. For more information, see Get recommendations from EC2 instance type finder. Next steps 16 Amazon Elastic Compute Cloud User Guide Best practices for Amazon EC2 To ensure the maximum benefit from Amazon EC2, we recommend that you perform the following best practices. Security • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible. For more information, see Creating IAM policies in the IAM User Guide. • Implement the least permissive rules for your security group. • Regularly patch, update, and secure the operating system and applications on your instance. For more information, see Update management. For guidelines specific to Windows operating systems, see Security best practices for Windows instances. • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud"} +{"global_id": 624, "doc_id": "ec2", "chunk_id": "14", "question_id": 1, "question": "What should you use to monitor your Amazon EC2 resources against security best practices?", "answer_span": "Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards.", "chunk": "Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. 
Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the"} +{"global_id": 625, "doc_id": "ec2", "chunk_id": "14", "question_id": 2, "question": "What should you understand regarding the root device type?", "answer_span": "Understand the implications of the root device type for data persistence, backup, and recovery.", "chunk": "Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. 
For more information, see AWS Trusted Advisor in the"} +{"global_id": 626, "doc_id": "ec2", "chunk_id": "14", "question_id": 3, "question": "What should you use to store temporary data?", "answer_span": "Use the instance store available for your instance to store temporary data.", "chunk": "Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the"} +{"global_id": 627, "doc_id": "ec2", "chunk_id": "14", "question_id": 4, "question": "What should you do to track and identify your AWS resources?", "answer_span": "Use instance metadata and custom resource tags to track and identify your AWS resources.", "chunk": "Amazon EC2 instances for software vulnerabilities and unintended network exposure. For more information, see the Amazon Inspector User Guide. • Use AWS Security Hub controls to monitor your Amazon EC2 resources against security best practices and security standards. For more information about using Security Hub, see Amazon Elastic Compute Cloud controls in the AWS Security Hub User Guide. Storage • Understand the implications of the root device type for data persistence, backup, and recovery. For more information, see Root device type. • Use separate Amazon EBS volumes for the operating system versus your data. Ensure that the volume with your data persists after instance termination. For more information, see Preserve data when an instance is terminated. • Use the instance store available for your instance to store temporary data. Remember that the data stored in instance store is deleted when you stop, hibernate, or terminate your instance. 
If you use instance store for database storage, ensure that you have a cluster with a replication factor that ensures fault tolerance. • Encrypt EBS volumes and snapshots. For more information, see Amazon EBS encryption in the Amazon EBS User Guide. 17 Amazon Elastic Compute Cloud User Guide Resource management • Use instance metadata and custom resource tags to track and identify your AWS resources. For more information, see Use instance metadata to manage your EC2 instance and Tag your Amazon EC2 resources. • View your current limits for Amazon EC2. Plan to request any limit increases in advance of the time that you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the"} +{"global_id": 628, "doc_id": "ec2", "chunk_id": "15", "question_id": 1, "question": "What should you use to inspect your AWS environment?", "answer_span": "Use AWS Trusted Advisor to inspect your AWS environment", "chunk": "you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI)"} +{"global_id": 629, "doc_id": "ec2", "chunk_id": "15", "question_id": 2, "question": "How can you regularly back up your EBS volumes?", "answer_span": "Regularly back up your EBS volumes using Amazon EBS snapshots", "chunk": "you'll need them. For more information, see Amazon EC2 service quotas. 
• Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI)"} +{"global_id": 630, "doc_id": "ec2", "chunk_id": "15", "question_id": 3, "question": "What is a basic solution for handling failover?", "answer_span": "For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance", "chunk": "you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. 
• Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI)"} +{"global_id": 631, "doc_id": "ec2", "chunk_id": "15", "question_id": 4, "question": "What is the recommended time-to-live (TTL) value for your applications?", "answer_span": "Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6", "chunk": "you'll need them. For more information, see Amazon EC2 service quotas. • Use AWS Trusted Advisor to inspect your AWS environment, and then make recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps. For more information, see AWS Trusted Advisor in the AWS Support User Guide. Backup and recovery • Regularly back up your EBS volumes using Amazon EBS snapshots, and create an Amazon Machine Image (AMI) from your instance to save the configuration as a template for launching future instances. For more information about AWS services that help achieve this use case, see AWS Backup and Amazon Data Lifecycle Manager. • Deploy critical components of your application across multiple Availability Zones, and replicate your data appropriately. • Design your applications to handle dynamic IP addressing when your instance restarts. For more information, see Amazon EC2 instance IP addressing. • Monitor and respond to events. For more information, see Monitor Amazon EC2 resources. • Ensure that you are prepared to handle failover. For a basic solution, you can manually attach a network interface or Elastic IP address to a replacement instance. For more information, see Elastic network interfaces. For an automated solution, you can use Amazon EC2 Auto Scaling. For more information, see the Amazon EC2 Auto Scaling User Guide. • Regularly test the process of recovering your instances and Amazon EBS volumes to ensure data and services are restored successfully. Networking • Set the time-to-live (TTL) value for your applications to 255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI)"} +{"global_id": 632, "doc_id": "ec2", "chunk_id": "16", "question_id": 1, "question": "What is an Amazon Machine Image (AMI)?", "answer_span": "An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance.", "chunk": "255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. 
The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements"} +{"global_id": 633, "doc_id": "ec2", "chunk_id": "16", "question_id": 2, "question": "What must you specify when you launch an instance?", "answer_span": "You must specify an AMI when you launch an instance.", "chunk": "255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements"} +{"global_id": 634, "doc_id": "ec2", "chunk_id": "16", "question_id": 3, "question": "What can you do with an AMI that you created?", "answer_span": "You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration.", "chunk": "255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 
18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements"} +{"global_id": 635, "doc_id": "ec2", "chunk_id": "16", "question_id": 4, "question": "What types of AMIs can you use to launch instances?", "answer_span": "You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace.", "chunk": "255, for IPv4 and IPv6. If you use a smaller value, there is a risk that the TTL will expire while application traffic is in transit, causing reachability issues for your instances. 18 Amazon Elastic Compute Cloud User Guide Amazon Machine Images in Amazon EC2 An Amazon Machine Image (AMI) is an image that provides the software that is required to set up and boot an Amazon EC2 instance. Each AMI also contains a block device mapping that specifies the block devices to attach to the instances that you launch. You must specify an AMI when you launch an instance. The AMI must be compatible with the instance type that you chose for your instance. You can use an AMI provided by AWS, a public AMI, an AMI that someone else shared with you, or an AMI that you purchased from the AWS Marketplace. An AMI is specific to the following: • Region • Operating system • Processor architecture • Root device type • Virtualization type You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations, as shown in the following diagram. 19 Amazon Elastic Compute Cloud User Guide You can create an AMI from your Amazon EC2 instances and then use it to launch instances with the same configuration. You can copy an AMI to another AWS Region, and then use it to launch instances in that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. 
Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements"} +{"global_id": 636, "doc_id": "ec2", "chunk_id": "17", "question_id": 1, "question": "What can you do with an AMI that you created?", "answer_span": "You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration.", "chunk": "that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20 Amazon Elastic Compute Cloud User Guide AMI types and characteristics in Amazon EC2 When you launch an instance, the AMI that you choose must be compatible with the instance type that you choose. You can select an AMI to use based on the following characteristics: • Region • Operating system • Processor architecture • Launch permissions • Root device type • Virtualization types Launch permissions Launch permissions determine who can use an AMI to launch instances. You can think of launch permissions as sharing an AMI—when you grant launch permissions, you're sharing the AMI with other users. Only the owner of an AMI can determine its availability by specifying launch permissions. Launch permissions fall into the following categories. Launch permission Description public The owner grants launch permissions to all AWS accounts. explicit The owner grants launch permissions to specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For"} +{"global_id": 637, "doc_id": "ec2", "chunk_id": "17", "question_id": 2, "question": "Where can you sell your AMI?", "answer_span": "You can sell your AMI using the AWS Marketplace.", "chunk": "that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20 Amazon Elastic Compute Cloud User Guide AMI types and characteristics in Amazon EC2 When you launch an instance, the AMI that you choose must be compatible with the instance type that you choose. You can select an AMI to use based on the following characteristics: • Region • Operating system • Processor architecture • Launch permissions • Root device type • Virtualization types Launch permissions Launch permissions determine who can use an AMI to launch instances. 
You can think of launch permissions as sharing an AMI—when you grant launch permissions, you're sharing the AMI with other users. Only the owner of an AMI can determine its availability by specifying launch permissions. Launch permissions fall into the following categories. Launch permission Description public The owner grants launch permissions to all AWS accounts. explicit The owner grants launch permissions to specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For"} +{"global_id": 638, "doc_id": "ec2", "chunk_id": "17", "question_id": 3, "question": "What determines who can use an AMI to launch instances?", "answer_span": "Launch permissions determine who can use an AMI to launch instances.", "chunk": "that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20 Amazon Elastic Compute Cloud User Guide AMI types and characteristics in Amazon EC2 When you launch an instance, the AMI that you choose must be compatible with the instance type that you choose. You can select an AMI to use based on the following characteristics: • Region • Operating system • Processor architecture • Launch permissions • Root device type • Virtualization types Launch permissions Launch permissions determine who can use an AMI to launch instances. You can think of launch permissions as sharing an AMI—when you grant launch permissions, you're sharing the AMI with other users. Only the owner of an AMI can determine its availability by specifying launch permissions. Launch permissions fall into the following categories. Launch permission Description public The owner grants launch permissions to all AWS accounts. explicit The owner grants launch permissions to specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For"} +{"global_id": 639, "doc_id": "ec2", "chunk_id": "17", "question_id": 4, "question": "What are the categories of launch permissions?", "answer_span": "Launch permissions fall into the following categories.", "chunk": "that Region. You can also share an AMI that you created with other accounts so that they can launch instances with the same configuration. You can sell your AMI using the AWS Marketplace. 
Contents • AMI types and characteristics in Amazon EC2 • Find an AMI that meets the requirements for your EC2 instance • Paid AMIs in the AWS Marketplace for Amazon EC2 instances • Amazon EC2 AMI lifecycle • Instance launch behavior with Amazon EC2 boot modes • Use encryption with EBS-backed AMIs • Understand shared AMI usage in Amazon EC2 • Monitor AMI events using Amazon EventBridge • Understand AMI billing information • AMI quotas in Amazon EC2 20 Amazon Elastic Compute Cloud User Guide AMI types and characteristics in Amazon EC2 When you launch an instance, the AMI that you choose must be compatible with the instance type that you choose. You can select an AMI to use based on the following characteristics: • Region • Operating system • Processor architecture • Launch permissions • Root device type • Virtualization types Launch permissions Launch permissions determine who can use an AMI to launch instances. You can think of launch permissions as sharing an AMI—when you grant launch permissions, you're sharing the AMI with other users. Only the owner of an AMI can determine its availability by specifying launch permissions. Launch permissions fall into the following categories. Launch permission Description public The owner grants launch permissions to all AWS accounts. explicit The owner grants launch permissions to specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For"} +{"global_id": 640, "doc_id": "ec2", "chunk_id": "18", "question_id": 1, "question": "What type of AMI is backed by an Amazon EBS volume?", "answer_span": "Amazon EBS-backed AMI – The root device for an instance launched from the AMI is an Amazon Elastic Block Store (Amazon EBS) volume created from an Amazon EBS snapshot.", "chunk": "specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For more information, see Paid AMIs in the AWS Marketplace for Amazon EC2 instances. Root device type All AMIs are categorized as either backed by Amazon EBS or backed by instance store. AMI characteristics 21 Amazon Elastic Compute Cloud User Guide • Amazon EBS-backed AMI – The root device for an instance launched from the AMI is an Amazon Elastic Block Store (Amazon EBS) volume created from an Amazon EBS snapshot. Supported for both Linux and Windows AMIs. • Amazon instance store-backed AMI – The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3. Supported for Linux AMIs only. Windows AMIs do not support instance store for the root device. For more information, see Root volumes for your Amazon EC2 instances. Note Instance store-backed AMIs are considered end of life and are not recommended for new usage. They are only supported on the following older instance types: C1, C3, D2, I2, M1, M2, M3, R3, and X1. The following table summarizes the important differences when using the two types of AMIs. 
Characteristic Amazon EBS-backed AMI Amazon instance store-backed AMI Root device volume EBS volume Instance store volume Boot time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the"} +{"global_id": 641, "doc_id": "ec2", "chunk_id": "18", "question_id": 2, "question": "Which AMI type is supported for Linux AMIs only?", "answer_span": "Amazon instance store-backed AMI – The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3.", "chunk": "specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For more information, see Paid AMIs in the AWS Marketplace for Amazon EC2 instances. Root device type All AMIs are categorized as either backed by Amazon EBS or backed by instance store. AMI characteristics 21 Amazon Elastic Compute Cloud User Guide • Amazon EBS-backed AMI – The root device for an instance launched from the AMI is an Amazon Elastic Block Store (Amazon EBS) volume created from an Amazon EBS snapshot. Supported for both Linux and Windows AMIs. • Amazon instance store-backed AMI – The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3. Supported for Linux AMIs only. Windows AMIs do not support instance store for the root device. For more information, see Root volumes for your Amazon EC2 instances. Note Instance store-backed AMIs are considered end of life and are not recommended for new usage. They are only supported on the following older instance types: C1, C3, D2, I2, M1, M2, M3, R3, and X1. The following table summarizes the important differences when using the two types of AMIs. Characteristic Amazon EBS-backed AMI Amazon instance store-backed AMI Root device volume EBS volume Instance store volume Boot time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the"} +{"global_id": 642, "doc_id": "ec2", "chunk_id": "18", "question_id": 3, "question": "What is the boot time for an instance using an Amazon EBS-backed AMI?", "answer_span": "Boot time for an instance Usually less than 1 minute", "chunk": "specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For more information, see Paid AMIs in the AWS Marketplace for Amazon EC2 instances. Root device type All AMIs are categorized as either backed by Amazon EBS or backed by instance store. 
AMI characteristics 21 Amazon Elastic Compute Cloud User Guide • Amazon EBS-backed AMI – The root device for an instance launched from the AMI is an Amazon Elastic Block Store (Amazon EBS) volume created from an Amazon EBS snapshot. Supported for both Linux and Windows AMIs. • Amazon instance store-backed AMI – The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3. Supported for Linux AMIs only. Windows AMIs do not support instance store for the root device. For more information, see Root volumes for your Amazon EC2 instances. Note Instance store-backed AMIs are considered end of life and are not recommended for new usage. They are only supported on the following older instance types: C1, C3, D2, I2, M1, M2, M3, R3, and X1. The following table summarizes the important differences when using the two types of AMIs. Characteristic Amazon EBS-backed AMI Amazon instance store-backed AMI Root device volume EBS volume Instance store volume Boot time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the"} +{"global_id": 643, "doc_id": "ec2", "chunk_id": "18", "question_id": 4, "question": "Which instance types are instance store-backed AMIs supported on?", "answer_span": "They are only supported on the following older instance types: C1, C3, D2, I2, M1, M2, M3, R3, and X1.", "chunk": "specific AWS accounts, organizat ions, or organizational units (OUs). implicit The owner has implicit launch permissions for an AMI. Amazon and the Amazon EC2 community provide a large selection of public AMIs. For more information, see Understand shared AMI usage in Amazon EC2. Developers can charge for their AMIs. For more information, see Paid AMIs in the AWS Marketplace for Amazon EC2 instances. Root device type All AMIs are categorized as either backed by Amazon EBS or backed by instance store. AMI characteristics 21 Amazon Elastic Compute Cloud User Guide • Amazon EBS-backed AMI – The root device for an instance launched from the AMI is an Amazon Elastic Block Store (Amazon EBS) volume created from an Amazon EBS snapshot. Supported for both Linux and Windows AMIs. • Amazon instance store-backed AMI – The root device for an instance launched from the AMI is an instance store volume created from a template stored in Amazon S3. Supported for Linux AMIs only. Windows AMIs do not support instance store for the root device. For more information, see Root volumes for your Amazon EC2 instances. Note Instance store-backed AMIs are considered end of life and are not recommended for new usage. They are only supported on the following older instance types: C1, C3, D2, I2, M1, M2, M3, R3, and X1. The following table summarizes the important differences when using the two types of AMIs. Characteristic Amazon EBS-backed AMI Amazon instance store-backed AMI Root device volume EBS volume Instance store volume Boot time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. 
Data on any instance store volumes persists only during the life of the"} +{"global_id": 644, "doc_id": "ec2", "chunk_id": "19", "question_id": 1, "question": "What is the default behavior of the root volume when the instance terminates?", "answer_span": "By default, the root volume is deleted when the instance terminates.", "chunk": "time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the instance. Can be in a stopped state. Even when the instance is stopped and not running, the root volume is persisted in Amazon EBS. Cannot be in a stopped state; instances are running or terminated. Data persistence Stopped state Root device type 22 Amazon Elastic Compute Cloud Characteristic Modifications Charges AMI creation/bundling User Guide Amazon EBS-backed AMI Amazon instance store-backed AMI The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped. Instance attributes are fixed for the life of an instance. You're charged for instance usage, EBS volume usage, and storing your AMI as an EBS snaps hot. You're charged for instance usage and storing your AMI in Amazon S3. Uses a single command/call Requires installation and use of AMI tools * By default, EBS root volumes have the DeleteOnTermination flag set to true. For information about how to change this flag so that the volume persists after termination, see Keep an Amazon EBS root volume after an Amazon EC2 instance terminates. ** Supported with io2 EBS Block Express only. For more information, see Provisioned IOPS SSD Block Express volumes in the Amazon EBS User Guide. Determine the root device type of your AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the"} +{"global_id": 645, "doc_id": "ec2", "chunk_id": "19", "question_id": 2, "question": "What happens to data on any other EBS volumes after instance termination by default?", "answer_span": "Data on any other EBS volumes persists after instance termination by default.", "chunk": "time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the instance. Can be in a stopped state. Even when the instance is stopped and not running, the root volume is persisted in Amazon EBS. Cannot be in a stopped state; instances are running or terminated. Data persistence Stopped state Root device type 22 Amazon Elastic Compute Cloud Characteristic Modifications Charges AMI creation/bundling User Guide Amazon EBS-backed AMI Amazon instance store-backed AMI The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped. Instance attributes are fixed for the life of an instance. You're charged for instance usage, EBS volume usage, and storing your AMI as an EBS snaps hot. You're charged for instance usage and storing your AMI in Amazon S3. 
Uses a single command/call Requires installation and use of AMI tools * By default, EBS root volumes have the DeleteOnTermination flag set to true. For information about how to change this flag so that the volume persists after termination, see Keep an Amazon EBS root volume after an Amazon EC2 instance terminates. ** Supported with io2 EBS Block Express only. For more information, see Provisioned IOPS SSD Block Express volumes in the Amazon EBS User Guide. Determine the root device type of your AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the"} +{"global_id": 646, "doc_id": "ec2", "chunk_id": "19", "question_id": 3, "question": "Can the instance type, kernel, RAM disk, and user data be changed while the instance is stopped?", "answer_span": "The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped.", "chunk": "time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the instance. Can be in a stopped state. Even when the instance is stopped and not running, the root volume is persisted in Amazon EBS. Cannot be in a stopped state; instances are running or terminated. Data persistence Stopped state Root device type 22 Amazon Elastic Compute Cloud Characteristic Modifications Charges AMI creation/bundling User Guide Amazon EBS-backed AMI Amazon instance store-backed AMI The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped. Instance attributes are fixed for the life of an instance. You're charged for instance usage, EBS volume usage, and storing your AMI as an EBS snaps hot. You're charged for instance usage and storing your AMI in Amazon S3. Uses a single command/call Requires installation and use of AMI tools * By default, EBS root volumes have the DeleteOnTermination flag set to true. For information about how to change this flag so that the volume persists after termination, see Keep an Amazon EBS root volume after an Amazon EC2 instance terminates. ** Supported with io2 EBS Block Express only. For more information, see Provisioned IOPS SSD Block Express volumes in the Amazon EBS User Guide. Determine the root device type of your AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the"} +{"global_id": 647, "doc_id": "ec2", "chunk_id": "19", "question_id": 4, "question": "What type of root volumes do Nitro-based instances support?", "answer_span": "Nitro-based instances support only EBS root volumes.", "chunk": "time for an instance Usually less than 1 minute Usually less than 5 minutes By default, the root volume is deleted when the instance terminates.* Data on any other EBS volumes persists after instance termination by default. Data on any instance store volumes persists only during the life of the instance. Can be in a stopped state. 
Even when the instance is stopped and not running, the root volume is persisted in Amazon EBS. Cannot be in a stopped state; instances are running or terminated. Data persistence Stopped state Root device type 22 Amazon Elastic Compute Cloud Characteristic Modifications Charges AMI creation/bundling User Guide Amazon EBS-backed AMI Amazon instance store-backed AMI The instance type, kernel, RAM disk, and user data can be changed while the instance is stopped. Instance attributes are fixed for the life of an instance. You're charged for instance usage, EBS volume usage, and storing your AMI as an EBS snaps hot. You're charged for instance usage and storing your AMI in Amazon S3. Uses a single command/call Requires installation and use of AMI tools * By default, EBS root volumes have the DeleteOnTermination flag set to true. For information about how to change this flag so that the volume persists after termination, see Keep an Amazon EBS root volume after an Amazon EC2 instance terminates. ** Supported with io2 EBS Block Express only. For more information, see Provisioned IOPS SSD Block Express volumes in the Amazon EBS User Guide. Determine the root device type of your AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the"} +{"global_id": 648, "doc_id": "ec2", "chunk_id": "20", "question_id": 1, "question": "What determines the type of the root volume for an EC2 instance?", "answer_span": "The AMI that you use to launch an EC2 instance determines the type of the root volume.", "chunk": "AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the only instance types that support instance store root volumes: C1, C3, D2, I2, M1, M2, M3, R3, and X1. Console To determine the root device type of an AMI 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. Determine the AMI root device type 23 Amazon Elastic Compute Cloud User Guide Amazon EC2 instances An Amazon EC2 instance is a virtual server in the AWS cloud environment. You have full control over your instance, from the time that you first start it (referred to as launching an instance) until you delete it (referred to as terminating an instance). You can choose from a variety of operating systems when you launch your instance. You can connect to your instance and customize it to meet your needs. For example, you can configure the operating system, install operating system updates, and install applications on your instance. Amazon EC2 provides a wide range of instance types. You can choose an instance type that provides the compute resources, memory, storage, and network performance that you need to run your applications. With Amazon EC2, you pay only for what you use. Billing for your instance starts when you launch your instance and it transitions to the running state. Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. 
For example,"} +{"global_id": 649, "doc_id": "ec2", "chunk_id": "20", "question_id": 2, "question": "What type of root volumes do Nitro-based instances support?", "answer_span": "Nitro-based instances support only EBS root volumes.", "chunk": "AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the only instance types that support instance store root volumes: C1, C3, D2, I2, M1, M2, M3, R3, and X1. Console To determine the root device type of an AMI 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. Determine the AMI root device type 23 Amazon Elastic Compute Cloud User Guide Amazon EC2 instances An Amazon EC2 instance is a virtual server in the AWS cloud environment. You have full control over your instance, from the time that you first start it (referred to as launching an instance) until you delete it (referred to as terminating an instance). You can choose from a variety of operating systems when you launch your instance. You can connect to your instance and customize it to meet your needs. For example, you can configure the operating system, install operating system updates, and install applications on your instance. Amazon EC2 provides a wide range of instance types. You can choose an instance type that provides the compute resources, memory, storage, and network performance that you need to run your applications. With Amazon EC2, you pay only for what you use. Billing for your instance starts when you launch your instance and it transitions to the running state. Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. For example,"} +{"global_id": 650, "doc_id": "ec2", "chunk_id": "20", "question_id": 3, "question": "What is an Amazon EC2 instance?", "answer_span": "An Amazon EC2 instance is a virtual server in the AWS cloud environment.", "chunk": "AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the only instance types that support instance store root volumes: C1, C3, D2, I2, M1, M2, M3, R3, and X1. Console To determine the root device type of an AMI 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. Determine the AMI root device type 23 Amazon Elastic Compute Cloud User Guide Amazon EC2 instances An Amazon EC2 instance is a virtual server in the AWS cloud environment. You have full control over your instance, from the time that you first start it (referred to as launching an instance) until you delete it (referred to as terminating an instance). You can choose from a variety of operating systems when you launch your instance. You can connect to your instance and customize it to meet your needs. For example, you can configure the operating system, install operating system updates, and install applications on your instance. Amazon EC2 provides a wide range of instance types. 
You can choose an instance type that provides the compute resources, memory, storage, and network performance that you need to run your applications. With Amazon EC2, you pay only for what you use. Billing for your instance starts when you launch your instance and it transitions to the running state. Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. For example,"} +{"global_id": 651, "doc_id": "ec2", "chunk_id": "20", "question_id": 4, "question": "When does billing for your instance start?", "answer_span": "Billing for your instance starts when you launch your instance and it transitions to the running state.", "chunk": "AMI The AMI that you use to launch an EC2 instance determines the type of the root volume. The root volume of an EC2 instance is either an EBS volume or an instance store volume. Nitro-based instances support only EBS root volumes. The following previous generation instance types are the only instance types that support instance store root volumes: C1, C3, D2, I2, M1, M2, M3, R3, and X1. Console To determine the root device type of an AMI 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. Determine the AMI root device type 23 Amazon Elastic Compute Cloud User Guide Amazon EC2 instances An Amazon EC2 instance is a virtual server in the AWS cloud environment. You have full control over your instance, from the time that you first start it (referred to as launching an instance) until you delete it (referred to as terminating an instance). You can choose from a variety of operating systems when you launch your instance. You can connect to your instance and customize it to meet your needs. For example, you can configure the operating system, install operating system updates, and install applications on your instance. Amazon EC2 provides a wide range of instance types. You can choose an instance type that provides the compute resources, memory, storage, and network performance that you need to run your applications. With Amazon EC2, you pay only for what you use. Billing for your instance starts when you launch your instance and it transitions to the running state. Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. For example,"} +{"global_id": 652, "doc_id": "ec2", "chunk_id": "21", "question_id": 1, "question": "When does billing stop for an instance?", "answer_span": "Billing stops when you stop your instance and resumes when you start your instance.", "chunk": "state. Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. For example, you can use Amazon EC2 Fleet or Amazon EC2 Auto Scaling to scale your capacity up or down as your instance utilization changes. You can reduce the costs for your instances using Spot Instances or Savings Plans. A managed instance is managed by a service provider, such as Amazon EKS Auto Mode. 
You can’t directly modify the settings of a managed instance. Managed instances are identified by a true value in the Managed field. For more information, see Amazon EC2 managed instances. Features and tasks • Amazon EC2 instance types • Amazon EC2 managed instances • Amazon EC2 billing and purchasing options • Store instance launch parameters in Amazon EC2 launch templates • Launch an Amazon EC2 instance • Connect to your EC2 instance • Amazon EC2 instance state changes 267 Amazon Elastic Compute Cloud User Guide • Automatic instance recovery • Use instance metadata to manage your EC2 instance • Detect whether a host is an EC2 instance • Instance identity documents for Amazon EC2 instances • Precision clock and time synchronization on your EC2 instance • Manage device drivers for your EC2 instance • Configure your Amazon EC2 Windows instance • Upgrade an EC2 Windows instance to a newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory,"} +{"global_id": 653, "doc_id": "ec2", "chunk_id": "21", "question_id": 2, "question": "What can you use to scale your capacity up or down?", "answer_span": "you can use Amazon EC2 Fleet or Amazon EC2 Auto Scaling to scale your capacity up or down as your instance utilization changes.", "chunk": "state. Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. For example, you can use Amazon EC2 Fleet or Amazon EC2 Auto Scaling to scale your capacity up or down as your instance utilization changes. You can reduce the costs for your instances using Spot Instances or Savings Plans. A managed instance is managed by a service provider, such as Amazon EKS Auto Mode. You can’t directly modify the settings of a managed instance. Managed instances are identified by a true value in the Managed field. For more information, see Amazon EC2 managed instances. Features and tasks • Amazon EC2 instance types • Amazon EC2 managed instances • Amazon EC2 billing and purchasing options • Store instance launch parameters in Amazon EC2 launch templates • Launch an Amazon EC2 instance • Connect to your EC2 instance • Amazon EC2 instance state changes 267 Amazon Elastic Compute Cloud User Guide • Automatic instance recovery • Use instance metadata to manage your EC2 instance • Detect whether a host is an EC2 instance • Instance identity documents for Amazon EC2 instances • Precision clock and time synchronization on your EC2 instance • Manage device drivers for your EC2 instance • Configure your Amazon EC2 Windows instance • Upgrade an EC2 Windows instance to a newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory,"} +{"global_id": 654, "doc_id": "ec2", "chunk_id": "21", "question_id": 3, "question": "How are managed instances identified?", "answer_span": "Managed instances are identified by a true value in the Managed field.", "chunk": "state. 
Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. For example, you can use Amazon EC2 Fleet or Amazon EC2 Auto Scaling to scale your capacity up or down as your instance utilization changes. You can reduce the costs for your instances using Spot Instances or Savings Plans. A managed instance is managed by a service provider, such as Amazon EKS Auto Mode. You can’t directly modify the settings of a managed instance. Managed instances are identified by a true value in the Managed field. For more information, see Amazon EC2 managed instances. Features and tasks • Amazon EC2 instance types • Amazon EC2 managed instances • Amazon EC2 billing and purchasing options • Store instance launch parameters in Amazon EC2 launch templates • Launch an Amazon EC2 instance • Connect to your EC2 instance • Amazon EC2 instance state changes 267 Amazon Elastic Compute Cloud User Guide • Automatic instance recovery • Use instance metadata to manage your EC2 instance • Detect whether a host is an EC2 instance • Instance identity documents for Amazon EC2 instances • Precision clock and time synchronization on your EC2 instance • Manage device drivers for your EC2 instance • Configure your Amazon EC2 Windows instance • Upgrade an EC2 Windows instance to a newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory,"} +{"global_id": 655, "doc_id": "ec2", "chunk_id": "21", "question_id": 4, "question": "What determines the hardware of the host computer used for your instance?", "answer_span": "the instance type that you specify determines the hardware of the host computer used for your instance.", "chunk": "state. Billing stops when you stop your instance and resumes when you start your instance. When you terminate your instance, billing stops when it transitions to the shutting down state. Amazon EC2 provides features that you can use to optimize the performance and the cost of your instances. For example, you can use Amazon EC2 Fleet or Amazon EC2 Auto Scaling to scale your capacity up or down as your instance utilization changes. You can reduce the costs for your instances using Spot Instances or Savings Plans. A managed instance is managed by a service provider, such as Amazon EKS Auto Mode. You can’t directly modify the settings of a managed instance. Managed instances are identified by a true value in the Managed field. For more information, see Amazon EC2 managed instances. 
Features and tasks • Amazon EC2 instance types • Amazon EC2 managed instances • Amazon EC2 billing and purchasing options • Store instance launch parameters in Amazon EC2 launch templates • Launch an Amazon EC2 instance • Connect to your EC2 instance • Amazon EC2 instance state changes 267 Amazon Elastic Compute Cloud User Guide • Automatic instance recovery • Use instance metadata to manage your EC2 instance • Detect whether a host is an EC2 instance • Instance identity documents for Amazon EC2 instances • Precision clock and time synchronization on your EC2 instance • Manage device drivers for your EC2 instance • Configure your Amazon EC2 Windows instance • Upgrade an EC2 Windows instance to a newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory,"} +{"global_id": 656, "doc_id": "ec2", "chunk_id": "22", "question_id": 1, "question": "What determines the hardware of the host computer used for your instance?", "answer_span": "the instance type that you specify determines the hardware of the host computer used for your instance.", "chunk": "newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory, and storage capabilities, and is grouped in an instance family based on these capabilities. Select an instance type based on the requirements of the application or software that you plan to run on your instance. For more information about features and use cases, see Amazon EC2 Instance Types. Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the network and the disk subsystem, among instances. If each instance on a host computer tries to use as much of one of these shared resources as possible, each receives an equal share of that resource. However, when a resource is underused, an instance can consume a higher share of that resource while it's available. Each instance type provides higher or lower minimum performance from a shared resource. For example, instance types with high I/O performance have a larger allocation of shared resources. Allocating a larger share of shared resources also reduces the variance of I/O performance. For most applications, moderate I/O performance is more than enough. However, for applications that require greater or more consistent I/O performance, consider an instance type with higher I/O performance. 
Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance"} +{"global_id": 657, "doc_id": "ec2", "chunk_id": "22", "question_id": 2, "question": "What do instance types offer?", "answer_span": "Each instance type offers different compute, memory, and storage capabilities, and is grouped in an instance family based on these capabilities.", "chunk": "newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory, and storage capabilities, and is grouped in an instance family based on these capabilities. Select an instance type based on the requirements of the application or software that you plan to run on your instance. For more information about features and use cases, see Amazon EC2 Instance Types. Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the network and the disk subsystem, among instances. If each instance on a host computer tries to use as much of one of these shared resources as possible, each receives an equal share of that resource. However, when a resource is underused, an instance can consume a higher share of that resource while it's available. Each instance type provides higher or lower minimum performance from a shared resource. For example, instance types with high I/O performance have a larger allocation of shared resources. Allocating a larger share of shared resources also reduces the variance of I/O performance. For most applications, moderate I/O performance is more than enough. However, for applications that require greater or more consistent I/O performance, consider an instance type with higher I/O performance. Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance"} +{"global_id": 658, "doc_id": "ec2", "chunk_id": "22", "question_id": 3, "question": "What happens if each instance on a host computer tries to use as much of one of the shared resources as possible?", "answer_span": "If each instance on a host computer tries to use as much of one of these shared resources as possible, each receives an equal share of that resource.", "chunk": "newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory, and storage capabilities, and is grouped in an instance family based on these capabilities. Select an instance type based on the requirements of the application or software that you plan to run on your instance. For more information about features and use cases, see Amazon EC2 Instance Types. 
Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the network and the disk subsystem, among instances. If each instance on a host computer tries to use as much of one of these shared resources as possible, each receives an equal share of that resource. However, when a resource is underused, an instance can consume a higher share of that resource while it's available. Each instance type provides higher or lower minimum performance from a shared resource. For example, instance types with high I/O performance have a larger allocation of shared resources. Allocating a larger share of shared resources also reduces the variance of I/O performance. For most applications, moderate I/O performance is more than enough. However, for applications that require greater or more consistent I/O performance, consider an instance type with higher I/O performance. Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance"} +{"global_id": 659, "doc_id": "ec2", "chunk_id": "22", "question_id": 4, "question": "What should you consider for applications that require greater or more consistent I/O performance?", "answer_span": "However, for applications that require greater or more consistent I/O performance, consider an instance type with higher I/O performance.", "chunk": "newer version of Windows Server • Tutorial: Connect an Amazon EC2 instance to an Amazon RDS database Amazon EC2 instance types When you launch an instance, the instance type that you specify determines the hardware of the host computer used for your instance. Each instance type offers different compute, memory, and storage capabilities, and is grouped in an instance family based on these capabilities. Select an instance type based on the requirements of the application or software that you plan to run on your instance. For more information about features and use cases, see Amazon EC2 Instance Types. Amazon EC2 dedicates some resources of the host computer, such as CPU, memory, and instance storage, to a particular instance. Amazon EC2 shares other resources of the host computer, such as the network and the disk subsystem, among instances. If each instance on a host computer tries to use as much of one of these shared resources as possible, each receives an equal share of that resource. However, when a resource is underused, an instance can consume a higher share of that resource while it's available. Each instance type provides higher or lower minimum performance from a shared resource. For example, instance types with high I/O performance have a larger allocation of shared resources. Allocating a larger share of shared resources also reduces the variance of I/O performance. For most applications, moderate I/O performance is more than enough. However, for applications that require greater or more consistent I/O performance, consider an instance type with higher I/O performance. 
Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance"} +{"global_id": 660, "doc_id": "ec2", "chunk_id": "23", "question_id": 1, "question": "What does Amazon EC2 provide?", "answer_span": "Amazon EC2 provides a wide selection of instance types optimized to fit different use cases.", "chunk": "type with higher I/O performance. Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance recommendations from Compute Optimizer • Amazon EC2 instance type changes • Burstable performance instances • Performance acceleration with GPU instances • Amazon EC2 Mac instances • Amazon EBS-optimized instance types • CPU options for Amazon EC2 instances • AMD SEV-SNP for Amazon EC2 instances • Processor state control for Amazon EC2 Linux instances Available instance types Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload. Instance type naming conventions Names are based on instance family, generation, processor family, capabilities, and size. For more information, see Naming conventions in the Amazon EC2 Instance Types Guide. Find an instance type To determine which instance types meet your requirements, such as supported Regions, compute resources, or storage resources, see Find an Amazon EC2 instance type and Amazon EC2 instance type specifications in the Amazon EC2 Instance Types Guide. Available instance types 269 Amazon Elastic Compute Cloud User Guide • Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support"} +{"global_id": 661, "doc_id": "ec2", "chunk_id": "23", "question_id": 2, "question": "What do instance types comprise?", "answer_span": "Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications.", "chunk": "type with higher I/O performance. 
Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance recommendations from Compute Optimizer • Amazon EC2 instance type changes • Burstable performance instances • Performance acceleration with GPU instances • Amazon EC2 Mac instances • Amazon EBS-optimized instance types • CPU options for Amazon EC2 instances • AMD SEV-SNP for Amazon EC2 instances • Processor state control for Amazon EC2 Linux instances Available instance types Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload. Instance type naming conventions Names are based on instance family, generation, processor family, capabilities, and size. For more information, see Naming conventions in the Amazon EC2 Instance Types Guide. Find an instance type To determine which instance types meet your requirements, such as supported Regions, compute resources, or storage resources, see Find an Amazon EC2 instance type and Amazon EC2 instance type specifications in the Amazon EC2 Instance Types Guide. Available instance types 269 Amazon Elastic Compute Cloud User Guide • Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support"} +{"global_id": 662, "doc_id": "ec2", "chunk_id": "23", "question_id": 3, "question": "What are instance type naming conventions based on?", "answer_span": "Names are based on instance family, generation, processor family, capabilities, and size.", "chunk": "type with higher I/O performance. Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance recommendations from Compute Optimizer • Amazon EC2 instance type changes • Burstable performance instances • Performance acceleration with GPU instances • Amazon EC2 Mac instances • Amazon EBS-optimized instance types • CPU options for Amazon EC2 instances • AMD SEV-SNP for Amazon EC2 instances • Processor state control for Amazon EC2 Linux instances Available instance types Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload. Instance type naming conventions Names are based on instance family, generation, processor family, capabilities, and size. For more information, see Naming conventions in the Amazon EC2 Instance Types Guide. 
Find an instance type To determine which instance types meet your requirements, such as supported Regions, compute resources, or storage resources, see Find an Amazon EC2 instance type and Amazon EC2 instance type specifications in the Amazon EC2 Instance Types Guide. Available instance types 269 Amazon Elastic Compute Cloud User Guide • Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support"} +{"global_id": 662, "doc_id": "ec2", "chunk_id": "23", "question_id": 3, "question": "What are instance type naming conventions based on?", "answer_span": "Names are based on instance family, generation, processor family, capabilities, and size.", "chunk": "type with higher I/O performance. Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance recommendations from Compute Optimizer • Amazon EC2 instance type changes • Burstable performance instances • Performance acceleration with GPU instances • Amazon EC2 Mac instances • Amazon EBS-optimized instance types • CPU options for Amazon EC2 instances • AMD SEV-SNP for Amazon EC2 instances • Processor state control for Amazon EC2 Linux instances Available instance types Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload. Instance type naming conventions Names are based on instance family, generation, processor family, capabilities, and size. For more information, see Naming conventions in the Amazon EC2 Instance Types Guide. Find an instance type To determine which instance types meet your requirements, such as supported Regions, compute resources, or storage resources, see Find an Amazon EC2 instance type and Amazon EC2 instance type specifications in the Amazon EC2 Instance Types Guide. Available instance types 269 Amazon Elastic Compute Cloud User Guide • Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support"} +{"global_id": 663, "doc_id": "ec2", "chunk_id": "23", "question_id": 4, "question": "Where can you find more information about instance types?", "answer_span": "see Find an Amazon EC2 instance type and Amazon EC2 instance type specifications in the Amazon EC2 Instance Types Guide.", "chunk": "type with higher I/O performance. Contents • Available instance types • Hardware specifications Instance types 268 Amazon Elastic Compute Cloud User Guide • Hypervisor type • AMI virtualization types • Processors • Find an Amazon EC2 instance type • Get recommendations from EC2 instance type finder • Get EC2 instance recommendations from Compute Optimizer • Amazon EC2 instance type changes • Burstable performance instances • Performance acceleration with GPU instances • Amazon EC2 Mac instances • Amazon EBS-optimized instance types • CPU options for Amazon EC2 instances • AMD SEV-SNP for Amazon EC2 instances • Processor state control for Amazon EC2 Linux instances Available instance types Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload. Instance type naming conventions Names are based on instance family, generation, processor family, capabilities, and size. For more information, see Naming conventions in the Amazon EC2 Instance Types Guide. Find an instance type To determine which instance types meet your requirements, such as supported Regions, compute resources, or storage resources, see Find an Amazon EC2 instance type and Amazon EC2 instance type specifications in the Amazon EC2 Instance Types Guide. Available instance types 269 Amazon Elastic Compute Cloud User Guide • Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support"} +{"global_id": 664, "doc_id": "ec2", "chunk_id": "24", "question_id": 1, "question": "What type of instance can you use to launch a container instance?", "answer_span": "Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI.", "chunk": "• Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. 
For more information, see Inferentia support in the Amazon EKS User Guide. Find an Amazon EC2 instance type Before you can launch an instance, you must select an instance type to use. The instance type that you choose might depend on the resources that your workload requires, such as compute, memory, or storage resources. It can be beneficial to identify several instance types that might suit your workload and evaluate their performance in a test environment. There is no substitute for measuring the performance of your application under load. You can get suggestions and guidance for EC2 instance types using the EC2 instance type finder. For more information, see the section called “EC2 instance type finder”. If you already have running EC2 instances, you can use AWS Compute Optimizer to get recommendations about the instance types that you should use to improve performance, save money, or both. For more information, see the section called “Compute Optimizer recommendations”. Tasks • Find an instance type using the console • Describe an instance type using the AWS CLI • Find an instance type using the AWS CLI • Find an instance type using the Tools for PowerShell Find an instance type using the console You can find an instance type that meets your needs using the Amazon EC2 console. To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless"} +{"global_id": 665, "doc_id": "ec2", "chunk_id": "24", "question_id": 2, "question": "Where can you find more information about Amazon EKS cluster with Inf1 instances?", "answer_span": "For more information, see Inferentia support in the Amazon EKS User Guide.", "chunk": "• Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support in the Amazon EKS User Guide. Find an Amazon EC2 instance type Before you can launch an instance, you must select an instance type to use. The instance type that you choose might depend on the resources that your workload requires, such as compute, memory, or storage resources. It can be beneficial to identify several instance types that might suit your workload and evaluate their performance in a test environment. There is no substitute for measuring the performance of your application under load. You can get suggestions and guidance for EC2 instance types using the EC2 instance type finder. For more information, see the section called “EC2 instance type finder”. If you already have running EC2 instances, you can use AWS Compute Optimizer to get recommendations about the instance types that you should use to improve performance, save money, or both. For more information, see the section called “Compute Optimizer recommendations”. Tasks • Find an instance type using the console • Describe an instance type using the AWS CLI • Find an instance type using the AWS CLI • Find an instance type using the Tools for PowerShell Find an instance type using the console You can find an instance type that meets your needs using the Amazon EC2 console. To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. 
From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless"} +{"global_id": 666, "doc_id": "ec2", "chunk_id": "24", "question_id": 3, "question": "What should you do before launching an instance?", "answer_span": "Before you can launch an instance, you must select an instance type to use.", "chunk": "• Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support in the Amazon EKS User Guide. Find an Amazon EC2 instance type Before you can launch an instance, you must select an instance type to use. The instance type that you choose might depend on the resources that your workload requires, such as compute, memory, or storage resources. It can be beneficial to identify several instance types that might suit your workload and evaluate their performance in a test environment. There is no substitute for measuring the performance of your application under load. You can get suggestions and guidance for EC2 instance types using the EC2 instance type finder. For more information, see the section called “EC2 instance type finder”. If you already have running EC2 instances, you can use AWS Compute Optimizer to get recommendations about the instance types that you should use to improve performance, save money, or both. For more information, see the section called “Compute Optimizer recommendations”. Tasks • Find an instance type using the console • Describe an instance type using the AWS CLI • Find an instance type using the AWS CLI • Find an instance type using the Tools for PowerShell Find an instance type using the console You can find an instance type that meets your needs using the Amazon EC2 console. To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless"} +{"global_id": 667, "doc_id": "ec2", "chunk_id": "24", "question_id": 4, "question": "What tool can you use to get recommendations about EC2 instance types?", "answer_span": "you can use AWS Compute Optimizer to get recommendations about the instance types that you should use to improve performance, save money, or both.", "chunk": "• Launch a container instance using an Inf1 or Inf2 instance and an Amazon ECS-optimized AMI. For more information, see Amazon Linux 2 (Inferentia) AMIs in the Amazon Elastic Container Service Developer Guide. • Create an Amazon EKS cluster with nodes running Inf1 instances. For more information, see Inferentia support in the Amazon EKS User Guide. Find an Amazon EC2 instance type Before you can launch an instance, you must select an instance type to use. The instance type that you choose might depend on the resources that your workload requires, such as compute, memory, or storage resources. It can be beneficial to identify several instance types that might suit your workload and evaluate their performance in a test environment. There is no substitute for measuring the performance of your application under load. You can get suggestions and guidance for EC2 instance types using the EC2 instance type finder. For more information, see the section called “EC2 instance type finder”. 
If you already have running EC2 instances, you can use AWS Compute Optimizer to get recommendations about the instance types that you should use to improve performance, save money, or both. For more information, see the section called “Compute Optimizer recommendations”. Tasks • Find an instance type using the console • Describe an instance type using the AWS CLI • Find an instance type using the AWS CLI • Find an instance type using the Tools for PowerShell Find an instance type using the console You can find an instance type that meets your needs using the Amazon EC2 console. To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless"} +{"global_id": 668, "doc_id": "ec2", "chunk_id": "25", "question_id": 1, "question": "How do you open the Amazon EC2 console?", "answer_span": "Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.", "chunk": "type that meets your needs using the Amazon EC2 console. To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless of your location. Find an instance type 274 Amazon Elastic Compute Cloud User Guide 3. In the navigation pane, choose Instance Types. 4. (Optional) Choose the preferences (gear) icon to select which instance type attributes to display, such as On-Demand Linux pricing, and then choose Confirm. Alternatively, select the name of an instance type to open its details page and view all attributes available through the console. The console does not display all the attributes available through the API or the command line. 5. Use the instance type attributes to filter the list of displayed instance types to only the instance types that meet your needs. For example, you can filter on the following attributes: • Availability zones – The name of the Availability Zone, Local Zone, or Wavelength Zone. For more information, see the section called “Regions and Zones”. • vCPUs or Cores – The number of vCPUs or cores. • Memory (GiB) – The memory size, in GiB. • Network performance – The network performance, in Gigabits. • Local instance storage – Indicates whether the instance type has local instance storage (true | false). 6. (Optional) To see a side-by-side comparison, select the checkbox for multiple instance types. The comparison is displayed at the bottom of the screen. 7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances"} +{"global_id": 669, "doc_id": "ec2", "chunk_id": "25", "question_id": 2, "question": "What can you filter on when finding an instance type?", "answer_span": "you can filter on the following attributes: • Availability zones – The name of the Availability Zone, Local Zone, or Wavelength Zone. • vCPUs or Cores – The number of vCPUs or cores. • Memory (GiB) – The memory size, in GiB. • Network performance – The network performance, in Gigabits. • Local instance storage – Indicates whether the instance type has local instance storage (true | false).", "chunk": "type that meets your needs using the Amazon EC2 console. 
To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless of your location. Find an instance type 274 Amazon Elastic Compute Cloud User Guide 3. In the navigation pane, choose Instance Types. 4. (Optional) Choose the preferences (gear) icon to select which instance type attributes to display, such as On-Demand Linux pricing, and then choose Confirm. Alternatively, select the name of an instance type to open its details page and view all attributes available through the console. The console does not display all the attributes available through the API or the command line. 5. Use the instance type attributes to filter the list of displayed instance types to only the instance types that meet your needs. For example, you can filter on the following attributes: • Availability zones – The name of the Availability Zone, Local Zone, or Wavelength Zone. For more information, see the section called “Regions and Zones”. • vCPUs or Cores – The number of vCPUs or cores. • Memory (GiB) – The memory size, in GiB. • Network performance – The network performance, in Gigabits. • Local instance storage – Indicates whether the instance type has local instance storage (true | false). 6. (Optional) To see a side-by-side comparison, select the checkbox for multiple instance types. The comparison is displayed at the bottom of the screen. 7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances"} +{"global_id": 670, "doc_id": "ec2", "chunk_id": "25", "question_id": 3, "question": "What is the optional action to see a side-by-side comparison?", "answer_span": "To see a side-by-side comparison, select the checkbox for multiple instance types.", "chunk": "type that meets your needs using the Amazon EC2 console. To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless of your location. Find an instance type 274 Amazon Elastic Compute Cloud User Guide 3. In the navigation pane, choose Instance Types. 4. (Optional) Choose the preferences (gear) icon to select which instance type attributes to display, such as On-Demand Linux pricing, and then choose Confirm. Alternatively, select the name of an instance type to open its details page and view all attributes available through the console. The console does not display all the attributes available through the API or the command line. 5. Use the instance type attributes to filter the list of displayed instance types to only the instance types that meet your needs. For example, you can filter on the following attributes: • Availability zones – The name of the Availability Zone, Local Zone, or Wavelength Zone. For more information, see the section called “Regions and Zones”. • vCPUs or Cores – The number of vCPUs or cores. • Memory (GiB) – The memory size, in GiB. • Network performance – The network performance, in Gigabits. • Local instance storage – Indicates whether the instance type has local instance storage (true | false). 6. 
(Optional) To see a side-by-side comparison, select the checkbox for multiple instance types. The comparison is displayed at the bottom of the screen. 7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances"} +{"global_id": 671, "doc_id": "ec2", "chunk_id": "25", "question_id": 4, "question": "What can you do to save the list of instance types?", "answer_span": "To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV.", "chunk": "type that meets your needs using the Amazon EC2 console. To find an instance type using the console 1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/. 2. From the navigation bar, select the Region in which to launch your instances. You can select any Region that's available to you, regardless of your location. Find an instance type 274 Amazon Elastic Compute Cloud User Guide 3. In the navigation pane, choose Instance Types. 4. (Optional) Choose the preferences (gear) icon to select which instance type attributes to display, such as On-Demand Linux pricing, and then choose Confirm. Alternatively, select the name of an instance type to open its details page and view all attributes available through the console. The console does not display all the attributes available through the API or the command line. 5. Use the instance type attributes to filter the list of displayed instance types to only the instance types that meet your needs. For example, you can filter on the following attributes: • Availability zones – The name of the Availability Zone, Local Zone, or Wavelength Zone. For more information, see the section called “Regions and Zones”. • vCPUs or Cores – The number of vCPUs or cores. • Memory (GiB) – The memory size, in GiB. • Network performance – The network performance, in Gigabits. • Local instance storage – Indicates whether the instance type has local instance storage (true | false). 6. (Optional) To see a side-by-side comparison, select the checkbox for multiple instance types. The comparison is displayed at the bottom of the screen. 7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances"} +{"global_id": 672, "doc_id": "ec2", "chunk_id": "26", "question_id": 1, "question": "Where is the comparison displayed?", "answer_span": "The comparison is displayed at the bottom of the screen.", "chunk": "The comparison is displayed at the bottom of the screen. 7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances using an instance type that meet your needs, select the checkbox for the instance type and choose Actions, Launch instance. For more information, see Launch an EC2 instance using the launch instance wizard in the console. Describe an instance type using the AWS CLI You can use the describe-instance-types command to describe a specific instance type. To fully describe an instance type The following command displays all available details for the specified instance type. 
The output is lengthy, so it is omitted here. aws ec2 describe-instance-types \\ --instance-types t2.micro \\ Find an instance type 275 Amazon Elastic Compute Cloud User Guide EC2 Fleet and Spot Fleet EC2 Fleet and Spot Fleet are designed to be a useful way to launch a fleet of tens, hundreds, or thousands of Amazon EC2 instances in a single operation. Each instance in a fleet is either configured by a launch template or a set of launch parameters that you configure manually at launch. Topics • Features and benefits • Which is the best fleet method to use? • Configuration options for your EC2 Fleet or Spot Fleet • Work with EC2 Fleet • Work with Spot Fleet • Monitor your EC2 Fleet or Spot Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running"} +{"global_id": 673, "doc_id": "ec2", "chunk_id": "26", "question_id": 2, "question": "What command is used to describe a specific instance type using the AWS CLI?", "answer_span": "You can use the describe-instance-types command to describe a specific instance type.", "chunk": "The comparison is displayed at the bottom of the screen. 7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances using an instance type that meet your needs, select the checkbox for the instance type and choose Actions, Launch instance. For more information, see Launch an EC2 instance using the launch instance wizard in the console. Describe an instance type using the AWS CLI You can use the describe-instance-types command to describe a specific instance type. To fully describe an instance type The following command displays all available details for the specified instance type. The output is lengthy, so it is omitted here. aws ec2 describe-instance-types \\ --instance-types t2.micro \\ Find an instance type 275 Amazon Elastic Compute Cloud User Guide EC2 Fleet and Spot Fleet EC2 Fleet and Spot Fleet are designed to be a useful way to launch a fleet of tens, hundreds, or thousands of Amazon EC2 instances in a single operation. Each instance in a fleet is either configured by a launch template or a set of launch parameters that you configure manually at launch. Topics • Features and benefits • Which is the best fleet method to use? • Configuration options for your EC2 Fleet or Spot Fleet • Work with EC2 Fleet • Work with Spot Fleet • Monitor your EC2 Fleet or Spot Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running"} +{"global_id": 674, "doc_id": "ec2", "chunk_id": "26", "question_id": 3, "question": "What is the purpose of EC2 Fleet and Spot Fleet?", "answer_span": "EC2 Fleet and Spot Fleet are designed to be a useful way to launch a fleet of tens, hundreds, or thousands of Amazon EC2 instances in a single operation.", "chunk": "The comparison is displayed at the bottom of the screen. 7. 
(Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances using an instance type that meet your needs, select the checkbox for the instance type and choose Actions, Launch instance. For more information, see Launch an EC2 instance using the launch instance wizard in the console. Describe an instance type using the AWS CLI You can use the describe-instance-types command to describe a specific instance type. To fully describe an instance type The following command displays all available details for the specified instance type. The output is lengthy, so it is omitted here. aws ec2 describe-instance-types \\ --instance-types t2.micro \\ Find an instance type 275 Amazon Elastic Compute Cloud User Guide EC2 Fleet and Spot Fleet EC2 Fleet and Spot Fleet are designed to be a useful way to launch a fleet of tens, hundreds, or thousands of Amazon EC2 instances in a single operation. Each instance in a fleet is either configured by a launch template or a set of launch parameters that you configure manually at launch. Topics • Features and benefits • Which is the best fleet method to use? • Configuration options for your EC2 Fleet or Spot Fleet • Work with EC2 Fleet • Work with Spot Fleet • Monitor your EC2 Fleet or Spot Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running"} +{"global_id": 675, "doc_id": "ec2", "chunk_id": "26", "question_id": 4, "question": "What do fleets provide?", "answer_span": "Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running.", "chunk": "The comparison is displayed at the bottom of the screen. 7. (Optional) To save the list of instance types to a comma-separated values (.csv) file for further review, choose Actions, Download list CSV. The file includes all instance types that match the filters you set. 8. (Optional) To launch instances using an instance type that meet your needs, select the checkbox for the instance type and choose Actions, Launch instance. For more information, see Launch an EC2 instance using the launch instance wizard in the console. Describe an instance type using the AWS CLI You can use the describe-instance-types command to describe a specific instance type. To fully describe an instance type The following command displays all available details for the specified instance type. The output is lengthy, so it is omitted here. aws ec2 describe-instance-types \\ --instance-types t2.micro \\ Find an instance type 275 Amazon Elastic Compute Cloud User Guide EC2 Fleet and Spot Fleet EC2 Fleet and Spot Fleet are designed to be a useful way to launch a fleet of tens, hundreds, or thousands of Amazon EC2 instances in a single operation. Each instance in a fleet is either configured by a launch template or a set of launch parameters that you configure manually at launch. Topics • Features and benefits • Which is the best fleet method to use? 
• Configuration options for your EC2 Fleet or Spot Fleet • Work with EC2 Fleet • Work with Spot Fleet • Monitor your EC2 Fleet or Spot Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running"} +{"global_id": 676, "doc_id": "ec2", "chunk_id": "27", "question_id": 1, "question": "What features and benefits do fleets provide?", "answer_span": "Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running applications on multiple EC2 instances.", "chunk": "Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running applications on multiple EC2 instances. Multiple instance types A fleet can launch multiple instance types, ensuring it isn't dependent on the availability of any single instance type. This increases the overall availability of instances in your fleet. Distributing instances across Availability Zones A fleet automatically attempts to distribute instances evenly across multiple Availability Zones for high availability. This provides resiliency in case an Availability Zone becomes unavailable. Features and benefits 1933 Amazon Elastic Compute Cloud User Guide Multiple purchasing options A fleet can launch multiple purchase options (Spot and On-Demand Instances), allowing you to optimize costs through Spot Instance usage. You can also take advantage of Reserved Instance and Savings Plans discounts by using them in conjunction with On-Demand Instances in the fleet. Automated replacement of Spot Instances If your fleet includes Spot Instances, it can automatically request replacement Spot capacity if your Spot Instances are interrupted. Through Capacity Rebalancing, a fleet can also monitor and proactively replace your Spot Instances that are at an elevated risk of interruption. Reserve On-Demand capacity A fleet can use an On-Demand Capacity Reservation to reserve On-Demand capacity. A fleet can also include Capacity Blocks for ML, allowing you to reserve GPU instances on a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet."} +{"global_id": 677, "doc_id": "ec2", "chunk_id": "27", "question_id": 2, "question": "How does a fleet ensure high availability?", "answer_span": "A fleet automatically attempts to distribute instances evenly across multiple Availability Zones for high availability.", "chunk": "Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running applications on multiple EC2 instances. 
Multiple instance types A fleet can launch multiple instance types, ensuring it isn't dependent on the availability of any single instance type. This increases the overall availability of instances in your fleet. Distributing instances across Availability Zones A fleet automatically attempts to distribute instances evenly across multiple Availability Zones for high availability. This provides resiliency in case an Availability Zone becomes unavailable. Features and benefits 1933 Amazon Elastic Compute Cloud User Guide Multiple purchasing options A fleet can launch multiple purchase options (Spot and On-Demand Instances), allowing you to optimize costs through Spot Instance usage. You can also take advantage of Reserved Instance and Savings Plans discounts by using them in conjunction with On-Demand Instances in the fleet. Automated replacement of Spot Instances If your fleet includes Spot Instances, it can automatically request replacement Spot capacity if your Spot Instances are interrupted. Through Capacity Rebalancing, a fleet can also monitor and proactively replace your Spot Instances that are at an elevated risk of interruption. Reserve On-Demand capacity A fleet can use an On-Demand Capacity Reservation to reserve On-Demand capacity. A fleet can also include Capacity Blocks for ML, allowing you to reserve GPU instances on a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet."} +{"global_id": 678, "doc_id": "ec2", "chunk_id": "27", "question_id": 3, "question": "What purchasing options can a fleet launch?", "answer_span": "A fleet can launch multiple purchase options (Spot and On-Demand Instances), allowing you to optimize costs through Spot Instance usage.", "chunk": "Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running applications on multiple EC2 instances. Multiple instance types A fleet can launch multiple instance types, ensuring it isn't dependent on the availability of any single instance type. This increases the overall availability of instances in your fleet. Distributing instances across Availability Zones A fleet automatically attempts to distribute instances evenly across multiple Availability Zones for high availability. This provides resiliency in case an Availability Zone becomes unavailable. Features and benefits 1933 Amazon Elastic Compute Cloud User Guide Multiple purchasing options A fleet can launch multiple purchase options (Spot and On-Demand Instances), allowing you to optimize costs through Spot Instance usage. You can also take advantage of Reserved Instance and Savings Plans discounts by using them in conjunction with On-Demand Instances in the fleet. Automated replacement of Spot Instances If your fleet includes Spot Instances, it can automatically request replacement Spot capacity if your Spot Instances are interrupted. Through Capacity Rebalancing, a fleet can also monitor and proactively replace your Spot Instances that are at an elevated risk of interruption. 
Reserve On-Demand capacity A fleet can use an On-Demand Capacity Reservation to reserve On-Demand capacity. A fleet can also include Capacity Blocks for ML, allowing you to reserve GPU instances on a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet."} +{"global_id": 679, "doc_id": "ec2", "chunk_id": "27", "question_id": 4, "question": "What is the recommended fleet method to use?", "answer_span": "As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet.", "chunk": "Fleet • Tutorials for EC2 Fleet • Example CLI configurations for EC2 Fleet • Example CLI configurations Spot Fleet • Quotas for EC2 Fleet and Spot Fleet Features and benefits Fleets provide the following features and benefits, enabling you to maximize cost savings and optimize availability and performance when running applications on multiple EC2 instances. Multiple instance types A fleet can launch multiple instance types, ensuring it isn't dependent on the availability of any single instance type. This increases the overall availability of instances in your fleet. Distributing instances across Availability Zones A fleet automatically attempts to distribute instances evenly across multiple Availability Zones for high availability. This provides resiliency in case an Availability Zone becomes unavailable. Features and benefits 1933 Amazon Elastic Compute Cloud User Guide Multiple purchasing options A fleet can launch multiple purchase options (Spot and On-Demand Instances), allowing you to optimize costs through Spot Instance usage. You can also take advantage of Reserved Instance and Savings Plans discounts by using them in conjunction with On-Demand Instances in the fleet. Automated replacement of Spot Instances If your fleet includes Spot Instances, it can automatically request replacement Spot capacity if your Spot Instances are interrupted. Through Capacity Rebalancing, a fleet can also monitor and proactively replace your Spot Instances that are at an elevated risk of interruption. Reserve On-Demand capacity A fleet can use an On-Demand Capacity Reservation to reserve On-Demand capacity. A fleet can also include Capacity Blocks for ML, allowing you to reserve GPU instances on a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet."} +{"global_id": 680, "doc_id": "ec2", "chunk_id": "28", "question_id": 1, "question": "What is the best fleet method to use for short duration machine learning workloads?", "answer_span": "we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling", "chunk": "a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet. 
The list of additional features includes automatic health check replacements for both Spot and OnDemand Instances, application-based health checks, and an integration with Elastic Load Balancing to ensure an even distribution of application traffic to your healthy instances. You can also use Auto Scaling groups when you use AWS services such as Amazon ECS, Amazon EKS (self-managed node groups), and Amazon VPC Lattice. For more information, see the Amazon EC2 Auto Scaling User Guide. If you can't use Amazon EC2 Auto Scaling, then you might consider using EC2 Fleet or Spot Fleet. EC2 Fleet and Spot Fleet offer the same core functionality. However, EC2 Fleet is only available using a command line and does not provide console support. Spot Fleet provides console support, but is based on a legacy API with no planned investment. Use the following table to determine which fleet method to use. Which fleet method to use? 1934 Amazon Elastic Compute Cloud User Guide Fleet method When to use? Use case Amazon EC2 Auto Scaling • You need multiple instances with either a single configuration or a mixed configuration. Create an Auto Scaling group that manages the lifecycle of your instances while maintaini ng the desired number of instances. Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. • If you"} +{"global_id": 681, "doc_id": "ec2", "chunk_id": "28", "question_id": 2, "question": "What additional features does Amazon EC2 Auto Scaling provide?", "answer_span": "The list of additional features includes automatic health check replacements for both Spot and OnDemand Instances, application-based health checks, and an integration with Elastic Load Balancing", "chunk": "a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet. The list of additional features includes automatic health check replacements for both Spot and OnDemand Instances, application-based health checks, and an integration with Elastic Load Balancing to ensure an even distribution of application traffic to your healthy instances. You can also use Auto Scaling groups when you use AWS services such as Amazon ECS, Amazon EKS (self-managed node groups), and Amazon VPC Lattice. For more information, see the Amazon EC2 Auto Scaling User Guide. If you can't use Amazon EC2 Auto Scaling, then you might consider using EC2 Fleet or Spot Fleet. EC2 Fleet and Spot Fleet offer the same core functionality. However, EC2 Fleet is only available using a command line and does not provide console support. Spot Fleet provides console support, but is based on a legacy API with no planned investment. Use the following table to determine which fleet method to use. Which fleet method to use? 1934 Amazon Elastic Compute Cloud User Guide Fleet method When to use? Use case Amazon EC2 Auto Scaling • You need multiple instances with either a single configuration or a mixed configuration. Create an Auto Scaling group that manages the lifecycle of your instances while maintaini ng the desired number of instances. 
Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. • If you"} +{"global_id": 682, "doc_id": "ec2", "chunk_id": "28", "question_id": 3, "question": "What should you consider if you can't use Amazon EC2 Auto Scaling?", "answer_span": "you might consider using EC2 Fleet or Spot Fleet", "chunk": "a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet. The list of additional features includes automatic health check replacements for both Spot and OnDemand Instances, application-based health checks, and an integration with Elastic Load Balancing to ensure an even distribution of application traffic to your healthy instances. You can also use Auto Scaling groups when you use AWS services such as Amazon ECS, Amazon EKS (self-managed node groups), and Amazon VPC Lattice. For more information, see the Amazon EC2 Auto Scaling User Guide. If you can't use Amazon EC2 Auto Scaling, then you might consider using EC2 Fleet or Spot Fleet. EC2 Fleet and Spot Fleet offer the same core functionality. However, EC2 Fleet is only available using a command line and does not provide console support. Spot Fleet provides console support, but is based on a legacy API with no planned investment. Use the following table to determine which fleet method to use. Which fleet method to use? 1934 Amazon Elastic Compute Cloud User Guide Fleet method When to use? Use case Amazon EC2 Auto Scaling • You need multiple instances with either a single configuration or a mixed configuration. Create an Auto Scaling group that manages the lifecycle of your instances while maintaini ng the desired number of instances. Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. • If you"} +{"global_id": 683, "doc_id": "ec2", "chunk_id": "28", "question_id": 4, "question": "What is a key difference between EC2 Fleet and Spot Fleet?", "answer_span": "EC2 Fleet is only available using a command line and does not provide console support", "chunk": "a future date to support short duration machine learning (ML) workloads. Which is the best fleet method to use? As a general best practice, we recommend launching fleets of Spot and On-Demand Instances with Amazon EC2 Auto Scaling because it provides additional features you can use to manage your fleet. The list of additional features includes automatic health check replacements for both Spot and OnDemand Instances, application-based health checks, and an integration with Elastic Load Balancing to ensure an even distribution of application traffic to your healthy instances. You can also use Auto Scaling groups when you use AWS services such as Amazon ECS, Amazon EKS (self-managed node groups), and Amazon VPC Lattice. For more information, see the Amazon EC2 Auto Scaling User Guide. If you can't use Amazon EC2 Auto Scaling, then you might consider using EC2 Fleet or Spot Fleet. 
EC2 Fleet and Spot Fleet offer the same core functionality. However, EC2 Fleet is only available using a command line and does not provide console support. Spot Fleet provides console support, but is based on a legacy API with no planned investment. Use the following table to determine which fleet method to use. Which fleet method to use? 1934 Amazon Elastic Compute Cloud User Guide Fleet method When to use? Use case Amazon EC2 Auto Scaling • You need multiple instances with either a single configuration or a mixed configuration. Create an Auto Scaling group that manages the lifecycle of your instances while maintaini ng the desired number of instances. Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. • If you"} +{"global_id": 684, "doc_id": "ec2", "chunk_id": "29", "question_id": 1, "question": "What is recommended if you don't need auto scaling?", "answer_span": "we recommend that you use an instant type EC2 Fleet.", "chunk": "of instances. Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. • If you don’t need auto scaling, we recommend that you use an instant type EC2 Fleet. and maximum limits. Create an instant fleet of both On-Demand Instances and Spot Instances in a single operation, with multiple launch specifications that vary by instance type, AMI, Availability Zone, or subnet. The Spot Instance allocation strategy defaults to lowestprice per unit, but we recommend changing it to price-capacity-opt imized . Spot Fleet • We strongly discourage using Spot Fleet because it is based on a legacy API with no planned investmen t. Use Spot Fleet only if you need console support for a use case for when you would use EC2 Fleet. • If you want to manage your instance lifecycle, rather use EC2 Fleet. • If you don't want to manage your instance Which fleet method to use? 1935 Amazon Elastic Compute Cloud Fleet method User Guide When to use? Use case lifecycle, rather use an Auto Scaling group. Configuration options for your EC2 Fleet or Spot Fleet When planning your EC2 Fleet or Spot Fleet, we recommend that you consider the following options when deciding how to configure your fleet. Configura tion option Question Documentation Fleet request type Do you want a fleet that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? Review the Spot best practices and use them when you plan your"} +{"global_id": 685, "doc_id": "ec2", "chunk_id": "29", "question_id": 2, "question": "What is the default Spot Instance allocation strategy?", "answer_span": "The Spot Instance allocation strategy defaults to lowestprice per unit.", "chunk": "of instances. Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. 
• If you don’t need auto scaling, we recommend that you use an instant type EC2 Fleet. and maximum limits. Create an instant fleet of both On-Demand Instances and Spot Instances in a single operation, with multiple launch specifications that vary by instance type, AMI, Availability Zone, or subnet. The Spot Instance allocation strategy defaults to lowestprice per unit, but we recommend changing it to price-capacity-opt imized . Spot Fleet • We strongly discourage using Spot Fleet because it is based on a legacy API with no planned investmen t. Use Spot Fleet only if you need console support for a use case for when you would use EC2 Fleet. • If you want to manage your instance lifecycle, rather use EC2 Fleet. • If you don't want to manage your instance Which fleet method to use? 1935 Amazon Elastic Compute Cloud Fleet method User Guide When to use? Use case lifecycle, rather use an Auto Scaling group. Configuration options for your EC2 Fleet or Spot Fleet When planning your EC2 Fleet or Spot Fleet, we recommend that you consider the following options when deciding how to configure your fleet. Configura tion option Question Documentation Fleet request type Do you want a fleet that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? Review the Spot best practices and use them when you plan your"} +{"global_id": 686, "doc_id": "ec2", "chunk_id": "29", "question_id": 3, "question": "What should you consider when planning your EC2 Fleet or Spot Fleet?", "answer_span": "we recommend that you consider the following options when deciding how to configure your fleet.", "chunk": "of instances. Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. • If you don’t need auto scaling, we recommend that you use an instant type EC2 Fleet. and maximum limits. Create an instant fleet of both On-Demand Instances and Spot Instances in a single operation, with multiple launch specifications that vary by instance type, AMI, Availability Zone, or subnet. The Spot Instance allocation strategy defaults to lowestprice per unit, but we recommend changing it to price-capacity-opt imized . Spot Fleet • We strongly discourage using Spot Fleet because it is based on a legacy API with no planned investmen t. Use Spot Fleet only if you need console support for a use case for when you would use EC2 Fleet. • If you want to manage your instance lifecycle, rather use EC2 Fleet. • If you don't want to manage your instance Which fleet method to use? 1935 Amazon Elastic Compute Cloud Fleet method User Guide When to use? Use case lifecycle, rather use an Auto Scaling group. Configuration options for your EC2 Fleet or Spot Fleet When planning your EC2 Fleet or Spot Fleet, we recommend that you consider the following options when deciding how to configure your fleet. Configura tion option Question Documentation Fleet request type Do you want a fleet that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? 
Review the Spot best practices and use them when you plan your"} +{"global_id": 687, "doc_id": "ec2", "chunk_id": "29", "question_id": 4, "question": "What is discouraged to use because it is based on a legacy API?", "answer_span": "We strongly discourage using Spot Fleet because it is based on a legacy API with no planned investment.", "chunk": "of instances. Supports horizontal scaling (adding more instances ) between specified minimum • You want to automate the lifecycle management of your instances. EC2 Fleet • You need multiple instances with either a single configuration or a mixed configuration. • You want to self-manage your instance lifecycle. • If you don’t need auto scaling, we recommend that you use an instant type EC2 Fleet. and maximum limits. Create an instant fleet of both On-Demand Instances and Spot Instances in a single operation, with multiple launch specifications that vary by instance type, AMI, Availability Zone, or subnet. The Spot Instance allocation strategy defaults to lowestprice per unit, but we recommend changing it to price-capacity-opt imized . Spot Fleet • We strongly discourage using Spot Fleet because it is based on a legacy API with no planned investmen t. Use Spot Fleet only if you need console support for a use case for when you would use EC2 Fleet. • If you want to manage your instance lifecycle, rather use EC2 Fleet. • If you don't want to manage your instance Which fleet method to use? 1935 Amazon Elastic Compute Cloud Fleet method User Guide When to use? Use case lifecycle, rather use an Auto Scaling group. Configuration options for your EC2 Fleet or Spot Fleet When planning your EC2 Fleet or Spot Fleet, we recommend that you consider the following options when deciding how to configure your fleet. Configura tion option Question Documentation Fleet request type Do you want a fleet that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? Review the Spot best practices and use them when you plan your"} +{"global_id": 688, "doc_id": "ec2", "chunk_id": "30", "question_id": 1, "question": "What type of request does EC2 Fleet submit for target capacity?", "answer_span": "that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time?", "chunk": "that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? Review the Spot best practices and use them when you plan your fleet so that you can provision the instances at the lowest possible price. Best practices for Amazon EC2 Spot Spending limit for your fleet Do you want to limit how much you'll pay for your fleet per hour? Set a spending limit for your EC2 Fleet or Spot Fleet Instance types and attribute -based instance type selection Do you want to specify the instance types in your fleet, or let Amazon EC2 select the instance types that meet your application requirements? 
Specify attributes for instance type selection for EC2 Fleet or Spot Fleet Configuration options 1936 Amazon Elastic Compute Cloud User Guide Configura tion option Question Documentation Instance weighting Do you want to assign weights to each instance type to represent their compute capacity and performance, so that Amazon EC2 can select any combination of available instance types to fulfil your desired target capacity? Use instance weighting to manage cost and performanc e of your EC2 Fleet or Spot Fleet Allocation strategies Do you want to decide whether to optimize for available capacity, price, or instance types to use for the Spot Instances and On-Demand Instances in your fleet? Use allocation strategies to determine how EC2 Fleet or Spot Fleet fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet?"} +{"global_id": 689, "doc_id": "ec2", "chunk_id": "30", "question_id": 2, "question": "What should you review when planning your fleet to provision Spot Instances at the lowest possible price?", "answer_span": "Review the Spot best practices and use them when you plan your fleet so that you can provision the instances at the lowest possible price.", "chunk": "that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? Review the Spot best practices and use them when you plan your fleet so that you can provision the instances at the lowest possible price. Best practices for Amazon EC2 Spot Spending limit for your fleet Do you want to limit how much you'll pay for your fleet per hour? Set a spending limit for your EC2 Fleet or Spot Fleet Instance types and attribute -based instance type selection Do you want to specify the instance types in your fleet, or let Amazon EC2 select the instance types that meet your application requirements? Specify attributes for instance type selection for EC2 Fleet or Spot Fleet Configuration options 1936 Amazon Elastic Compute Cloud User Guide Configura tion option Question Documentation Instance weighting Do you want to assign weights to each instance type to represent their compute capacity and performance, so that Amazon EC2 can select any combination of available instance types to fulfil your desired target capacity? Use instance weighting to manage cost and performanc e of your EC2 Fleet or Spot Fleet Allocation strategies Do you want to decide whether to optimize for available capacity, price, or instance types to use for the Spot Instances and On-Demand Instances in your fleet? Use allocation strategies to determine how EC2 Fleet or Spot Fleet fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? 
Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet?"} +{"global_id": 690, "doc_id": "ec2", "chunk_id": "30", "question_id": 3, "question": "What can you use to manage cost and performance of your EC2 Fleet or Spot Fleet?", "answer_span": "Use instance weighting to manage cost and performance of your EC2 Fleet or Spot Fleet", "chunk": "that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? Review the Spot best practices and use them when you plan your fleet so that you can provision the instances at the lowest possible price. Best practices for Amazon EC2 Spot Spending limit for your fleet Do you want to limit how much you'll pay for your fleet per hour? Set a spending limit for your EC2 Fleet or Spot Fleet Instance types and attribute -based instance type selection Do you want to specify the instance types in your fleet, or let Amazon EC2 select the instance types that meet your application requirements? Specify attributes for instance type selection for EC2 Fleet or Spot Fleet Configuration options 1936 Amazon Elastic Compute Cloud User Guide Configura tion option Question Documentation Instance weighting Do you want to assign weights to each instance type to represent their compute capacity and performance, so that Amazon EC2 can select any combination of available instance types to fulfil your desired target capacity? Use instance weighting to manage cost and performanc e of your EC2 Fleet or Spot Fleet Allocation strategies Do you want to decide whether to optimize for available capacity, price, or instance types to use for the Spot Instances and On-Demand Instances in your fleet? Use allocation strategies to determine how EC2 Fleet or Spot Fleet fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet?"} +{"global_id": 691, "doc_id": "ec2", "chunk_id": "30", "question_id": 4, "question": "What do you want to decide regarding allocation strategies for your fleet?", "answer_span": "Do you want to decide whether to optimize for available capacity, price, or instance types to use for the Spot Instances and On-Demand Instances in your fleet?", "chunk": "that submits a one-time request for the desired target capacity, or a fleet that maintains target capacity over time? EC2 Fleet and Spot Fleet request types Spot Instances Do you plan to include Spot Instances in your fleet? Review the Spot best practices and use them when you plan your fleet so that you can provision the instances at the lowest possible price. Best practices for Amazon EC2 Spot Spending limit for your fleet Do you want to limit how much you'll pay for your fleet per hour? Set a spending limit for your EC2 Fleet or Spot Fleet Instance types and attribute -based instance type selection Do you want to specify the instance types in your fleet, or let Amazon EC2 select the instance types that meet your application requirements? 
Specify attributes for instance type selection for EC2 Fleet or Spot Fleet Configuration options 1936 Amazon Elastic Compute Cloud User Guide Configura tion option Question Documentation Instance weighting Do you want to assign weights to each instance type to represent their compute capacity and performance, so that Amazon EC2 can select any combination of available instance types to fulfil your desired target capacity? Use instance weighting to manage cost and performanc e of your EC2 Fleet or Spot Fleet Allocation strategies Do you want to decide whether to optimize for available capacity, price, or instance types to use for the Spot Instances and On-Demand Instances in your fleet? Use allocation strategies to determine how EC2 Fleet or Spot Fleet fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet?"} +{"global_id": 692, "doc_id": "ec2", "chunk_id": "31", "question_id": 1, "question": "What does Capacity Rebalancing do?", "answer_span": "Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances.", "chunk": "fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet? Use Capacity Reservations to reserve On-Demand capacity in EC2 Fleet EC2 Fleet and Spot Fleet request types The request type for an EC2 Fleet or Spot Fleet determines whether the request is synchronous or asynchronous, and whether it is a one-time request for the desired target capacity or an ongoing effort to maintain the capacity over time. When configuring your fleet, you must specify the request type. Both EC2 Fleet and Spot Fleet offer two request types: request and maintain. In addition, EC2 Fleet offers a third request type called instant. Request types 1937 Amazon Elastic Compute Cloud User Guide Fleet request types instant (EC2 Fleet only) If you configure the request type as instant, EC2 Fleet places a synchronous one-time request for your desired capacity. In the API response, it returns the instances that launched and provides errors for those instances that could not be launched. For more information, see Configure an EC2 Fleet of type instant. request If you configure the request type as request, the fleet places an asynchronous one-time request for your desired capacity. If capacity diminishes due to Spot interruptions, the fleet does not attempt to replenish Spot Instances, nor does it submit requests in alternative Spot capacity pools if capacity is unavailable. When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. 
maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically"} +{"global_id": 693, "doc_id": "ec2", "chunk_id": "31", "question_id": 2, "question": "How can you reserve capacity for On-Demand Instances?", "answer_span": "Use Capacity Reservations to reserve On-Demand capacity in EC2 Fleet.", "chunk": "fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet? Use Capacity Reservations to reserve On-Demand capacity in EC2 Fleet EC2 Fleet and Spot Fleet request types The request type for an EC2 Fleet or Spot Fleet determines whether the request is synchronous or asynchronous, and whether it is a one-time request for the desired target capacity or an ongoing effort to maintain the capacity over time. When configuring your fleet, you must specify the request type. Both EC2 Fleet and Spot Fleet offer two request types: request and maintain. In addition, EC2 Fleet offers a third request type called instant. Request types 1937 Amazon Elastic Compute Cloud User Guide Fleet request types instant (EC2 Fleet only) If you configure the request type as instant, EC2 Fleet places a synchronous one-time request for your desired capacity. In the API response, it returns the instances that launched and provides errors for those instances that could not be launched. For more information, see Configure an EC2 Fleet of type instant. request If you configure the request type as request, the fleet places an asynchronous one-time request for your desired capacity. If capacity diminishes due to Spot interruptions, the fleet does not attempt to replenish Spot Instances, nor does it submit requests in alternative Spot capacity pools if capacity is unavailable. When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically"} +{"global_id": 694, "doc_id": "ec2", "chunk_id": "31", "question_id": 3, "question": "What are the request types offered by EC2 Fleet and Spot Fleet?", "answer_span": "Both EC2 Fleet and Spot Fleet offer two request types: request and maintain.", "chunk": "fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet? Use Capacity Reservations to reserve On-Demand capacity in EC2 Fleet EC2 Fleet and Spot Fleet request types The request type for an EC2 Fleet or Spot Fleet determines whether the request is synchronous or asynchronous, and whether it is a one-time request for the desired target capacity or an ongoing effort to maintain the capacity over time. When configuring your fleet, you must specify the request type. Both EC2 Fleet and Spot Fleet offer two request types: request and maintain. In addition, EC2 Fleet offers a third request type called instant. 
Request types 1937 Amazon Elastic Compute Cloud User Guide Fleet request types instant (EC2 Fleet only) If you configure the request type as instant, EC2 Fleet places a synchronous one-time request for your desired capacity. In the API response, it returns the instances that launched and provides errors for those instances that could not be launched. For more information, see Configure an EC2 Fleet of type instant. request If you configure the request type as request, the fleet places an asynchronous one-time request for your desired capacity. If capacity diminishes due to Spot interruptions, the fleet does not attempt to replenish Spot Instances, nor does it submit requests in alternative Spot capacity pools if capacity is unavailable. When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically"} +{"global_id": 695, "doc_id": "ec2", "chunk_id": "31", "question_id": 4, "question": "What happens if you configure the request type as instant?", "answer_span": "If you configure the request type as instant, EC2 Fleet places a synchronous one-time request for your desired capacity.", "chunk": "fulfills Spot and On-Demand capacity Capacity Rebalanci ng Do you want your fleet to automatically replace at-risk Spot Instances? Use Capacity Rebalancing in EC2 Fleet and Spot Fleet to replace at-risk Spot Instances OnDemand Capacity Reservati on Do you want to reserve capacity for the OnDemand Instances in your fleet? Use Capacity Reservations to reserve On-Demand capacity in EC2 Fleet EC2 Fleet and Spot Fleet request types The request type for an EC2 Fleet or Spot Fleet determines whether the request is synchronous or asynchronous, and whether it is a one-time request for the desired target capacity or an ongoing effort to maintain the capacity over time. When configuring your fleet, you must specify the request type. Both EC2 Fleet and Spot Fleet offer two request types: request and maintain. In addition, EC2 Fleet offers a third request type called instant. Request types 1937 Amazon Elastic Compute Cloud User Guide Fleet request types instant (EC2 Fleet only) If you configure the request type as instant, EC2 Fleet places a synchronous one-time request for your desired capacity. In the API response, it returns the instances that launched and provides errors for those instances that could not be launched. For more information, see Configure an EC2 Fleet of type instant. request If you configure the request type as request, the fleet places an asynchronous one-time request for your desired capacity. If capacity diminishes due to Spot interruptions, the fleet does not attempt to replenish Spot Instances, nor does it submit requests in alternative Spot capacity pools if capacity is unavailable. When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. 
maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically"} +{"global_id": 696, "doc_id": "ec2", "chunk_id": "32", "question_id": 1, "question": "What should you do when creating a Spot Fleet of type request using the console if capacity is unavailable?", "answer_span": "alternative Spot capacity pools if capacity is unavailable.", "chunk": "alternative Spot capacity pools if capacity is unavailable. When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically replenishing any interrupted Spot Instances. When creating a Spot Fleet of type maintain using the console, select the Maintain target capacity checkbox Configure an EC2 Fleet of type instant The EC2 Fleet of type instant is a synchronous one-time request that makes only one attempt to launch your desired capacity. The API response lists the instances that launched, along with errors for those instances that could not be launched. There are several benefits to using an EC2 Fleet of type instant, which are described in this article. Example configurations are provided at the end of the article. For workloads that need a launch-only API to launch EC2 instances, you can use the RunInstances API. However, with RunInstances, you can only launch On-Demand Instances or Spot Instances, but not both in the same request. Furthermore, when you use RunInstances to launch Spot Instances, your Spot Instance request is limited to one instance type and one Availability Zone. This targets a single Spot capacity pool (a set of unused instances with the same instance type and Availability Zone). If the Spot capacity pool does not have sufficient Spot Instance capacity for your request, the RunInstances call fails. Request types 1938 Amazon Elastic Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot"} +{"global_id": 697, "doc_id": "ec2", "chunk_id": "32", "question_id": 2, "question": "What happens when you configure the request type as maintain?", "answer_span": "the fleet places an asynchronous request for your desired capacity, and maintains it by automatically replenishing any interrupted Spot Instances.", "chunk": "alternative Spot capacity pools if capacity is unavailable. When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically replenishing any interrupted Spot Instances. When creating a Spot Fleet of type maintain using the console, select the Maintain target capacity checkbox Configure an EC2 Fleet of type instant The EC2 Fleet of type instant is a synchronous one-time request that makes only one attempt to launch your desired capacity. The API response lists the instances that launched, along with errors for those instances that could not be launched. There are several benefits to using an EC2 Fleet of type instant, which are described in this article. 
Example configurations are provided at the end of the article. For workloads that need a launch-only API to launch EC2 instances, you can use the RunInstances API. However, with RunInstances, you can only launch On-Demand Instances or Spot Instances, but not both in the same request. Furthermore, when you use RunInstances to launch Spot Instances, your Spot Instance request is limited to one instance type and one Availability Zone. This targets a single Spot capacity pool (a set of unused instances with the same instance type and Availability Zone). If the Spot capacity pool does not have sufficient Spot Instance capacity for your request, the RunInstances call fails. Request types 1938 Amazon Elastic Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot"} +{"global_id": 698, "doc_id": "ec2", "chunk_id": "32", "question_id": 3, "question": "What is the EC2 Fleet of type instant?", "answer_span": "The EC2 Fleet of type instant is a synchronous one-time request that makes only one attempt to launch your desired capacity.", "chunk": "alternative Spot capacity pools if capacity is unavailable. When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically replenishing any interrupted Spot Instances. When creating a Spot Fleet of type maintain using the console, select the Maintain target capacity checkbox Configure an EC2 Fleet of type instant The EC2 Fleet of type instant is a synchronous one-time request that makes only one attempt to launch your desired capacity. The API response lists the instances that launched, along with errors for those instances that could not be launched. There are several benefits to using an EC2 Fleet of type instant, which are described in this article. Example configurations are provided at the end of the article. For workloads that need a launch-only API to launch EC2 instances, you can use the RunInstances API. However, with RunInstances, you can only launch On-Demand Instances or Spot Instances, but not both in the same request. Furthermore, when you use RunInstances to launch Spot Instances, your Spot Instance request is limited to one instance type and one Availability Zone. This targets a single Spot capacity pool (a set of unused instances with the same instance type and Availability Zone). If the Spot capacity pool does not have sufficient Spot Instance capacity for your request, the RunInstances call fails. Request types 1938 Amazon Elastic Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot"} +{"global_id": 699, "doc_id": "ec2", "chunk_id": "32", "question_id": 4, "question": "What is a limitation of using RunInstances to launch Spot Instances?", "answer_span": "your Spot Instance request is limited to one instance type and one Availability Zone.", "chunk": "alternative Spot capacity pools if capacity is unavailable. 
When creating a Spot Fleet of type request using the console, clear the Maintain target capacity checkbox. maintain (default) If you configure the request type as maintain, the fleet places an asynchronous request for your desired capacity, and maintains it by automatically replenishing any interrupted Spot Instances. When creating a Spot Fleet of type maintain using the console, select the Maintain target capacity checkbox Configure an EC2 Fleet of type instant The EC2 Fleet of type instant is a synchronous one-time request that makes only one attempt to launch your desired capacity. The API response lists the instances that launched, along with errors for those instances that could not be launched. There are several benefits to using an EC2 Fleet of type instant, which are described in this article. Example configurations are provided at the end of the article. For workloads that need a launch-only API to launch EC2 instances, you can use the RunInstances API. However, with RunInstances, you can only launch On-Demand Instances or Spot Instances, but not both in the same request. Furthermore, when you use RunInstances to launch Spot Instances, your Spot Instance request is limited to one instance type and one Availability Zone. This targets a single Spot capacity pool (a set of unused instances with the same instance type and Availability Zone). If the Spot capacity pool does not have sufficient Spot Instance capacity for your request, the RunInstances call fails. Request types 1938 Amazon Elastic Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot"} +{"global_id": 700, "doc_id": "ec2", "chunk_id": "33", "question_id": 1, "question": "What API is recommended instead of RunInstances to launch Spot Instances?", "answer_span": "we recommend that you rather use the CreateFleet API with the type parameter set to instant", "chunk": "Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot Instances, or both. The request for Spot Instances is fulfilled if there is available capacity and the maximum price per hour for your request exceeds the Spot price. • Increase the availability of Spot Instances. By using an EC2 Fleet of type instant, you can launch Spot Instances following Spot best practices with the resulting benefits: • Spot best practice: Be flexible about instance types and Availability Zones. Benefit: By specifying several instance types and Availability Zones, you increase the number of Spot capacity pools. This gives the Spot service a better chance of finding and allocating your desired Spot compute capacity. A good rule of thumb is to be flexible across at least 10 instance types for each workload and make sure that all Availability Zones are configured for use in your VPC. • Spot best practice: Use the price-capacity-optimized allocation strategy. Benefit: The price-capacity-optimized allocation strategy identifies instances from the most-available Spot capacity pools, and then automatically provisions instances from the lowest priced of these pools. 
Because your Spot Instance capacity is sourced from pools with optimal capacity, this decreases the possibility that your Spot Instances will be interrupted when Amazon EC2 needs the capacity back. • Get access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a"} +{"global_id": 701, "doc_id": "ec2", "chunk_id": "33", "question_id": 2, "question": "What is a benefit of using an EC2 Fleet of type instant?", "answer_span": "you can launch Spot Instances following Spot best practices with the resulting benefits", "chunk": "Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot Instances, or both. The request for Spot Instances is fulfilled if there is available capacity and the maximum price per hour for your request exceeds the Spot price. • Increase the availability of Spot Instances. By using an EC2 Fleet of type instant, you can launch Spot Instances following Spot best practices with the resulting benefits: • Spot best practice: Be flexible about instance types and Availability Zones. Benefit: By specifying several instance types and Availability Zones, you increase the number of Spot capacity pools. This gives the Spot service a better chance of finding and allocating your desired Spot compute capacity. A good rule of thumb is to be flexible across at least 10 instance types for each workload and make sure that all Availability Zones are configured for use in your VPC. • Spot best practice: Use the price-capacity-optimized allocation strategy. Benefit: The price-capacity-optimized allocation strategy identifies instances from the most-available Spot capacity pools, and then automatically provisions instances from the lowest priced of these pools. Because your Spot Instance capacity is sourced from pools with optimal capacity, this decreases the possibility that your Spot Instances will be interrupted when Amazon EC2 needs the capacity back. • Get access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a"} +{"global_id": 702, "doc_id": "ec2", "chunk_id": "33", "question_id": 3, "question": "What is a good rule of thumb for instance types when using Spot best practices?", "answer_span": "A good rule of thumb is to be flexible across at least 10 instance types for each workload", "chunk": "Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot Instances, or both. The request for Spot Instances is fulfilled if there is available capacity and the maximum price per hour for your request exceeds the Spot price. • Increase the availability of Spot Instances. 
By using an EC2 Fleet of type instant, you can launch Spot Instances following Spot best practices with the resulting benefits: • Spot best practice: Be flexible about instance types and Availability Zones. Benefit: By specifying several instance types and Availability Zones, you increase the number of Spot capacity pools. This gives the Spot service a better chance of finding and allocating your desired Spot compute capacity. A good rule of thumb is to be flexible across at least 10 instance types for each workload and make sure that all Availability Zones are configured for use in your VPC. • Spot best practice: Use the price-capacity-optimized allocation strategy. Benefit: The price-capacity-optimized allocation strategy identifies instances from the most-available Spot capacity pools, and then automatically provisions instances from the lowest priced of these pools. Because your Spot Instance capacity is sourced from pools with optimal capacity, this decreases the possibility that your Spot Instances will be interrupted when Amazon EC2 needs the capacity back. • Get access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a"} +{"global_id": 703, "doc_id": "ec2", "chunk_id": "33", "question_id": 4, "question": "What does the price-capacity-optimized allocation strategy do?", "answer_span": "The price-capacity-optimized allocation strategy identifies instances from the most-available Spot capacity pools", "chunk": "Compute Cloud User Guide Instead of using RunInstances to launch Spot Instances, we recommend that you rather use the CreateFleet API with the type parameter set to instant for the following benefits: • Launch On-Demand Instances and Spot Instances in one request. An EC2 Fleet can launch OnDemand Instances, Spot Instances, or both. The request for Spot Instances is fulfilled if there is available capacity and the maximum price per hour for your request exceeds the Spot price. • Increase the availability of Spot Instances. By using an EC2 Fleet of type instant, you can launch Spot Instances following Spot best practices with the resulting benefits: • Spot best practice: Be flexible about instance types and Availability Zones. Benefit: By specifying several instance types and Availability Zones, you increase the number of Spot capacity pools. This gives the Spot service a better chance of finding and allocating your desired Spot compute capacity. A good rule of thumb is to be flexible across at least 10 instance types for each workload and make sure that all Availability Zones are configured for use in your VPC. • Spot best practice: Use the price-capacity-optimized allocation strategy. Benefit: The price-capacity-optimized allocation strategy identifies instances from the most-available Spot capacity pools, and then automatically provisions instances from the lowest priced of these pools. Because your Spot Instance capacity is sourced from pools with optimal capacity, this decreases the possibility that your Spot Instances will be interrupted when Amazon EC2 needs the capacity back. • Get access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. 
EC2 Fleet provides a"} +{"global_id": 704, "doc_id": "ec2", "chunk_id": "34", "question_id": 1, "question": "What should you use for workloads that need a launch-only API?", "answer_span": "use the EC2 Fleet of type instant instead of the RunInstances API.", "chunk": "access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a wider set of capabilities than RunInstances, as demonstrated in the following examples. For all other workloads, you should use Amazon EC2 Auto Scaling because it supplies a more comprehensive feature set for a wide variety of workloads, like ELB-backed applications, containerized workloads, and queue processing jobs. You can use EC2 Fleet of type instant to launch instances into Capacity Blocks. For more information, see Tutorial: Configure your EC2 Fleet to launch instances into Capacity Blocks. AWS services like Amazon EC2 Auto Scaling and Amazon EMR use EC2 Fleet of type instant to launch EC2 instances. Request types 1939 Amazon Elastic Compute Cloud User Guide Prerequisites for EC2 Fleet of type instant For the prerequisites for creating an EC2 Fleet, see EC2 Fleet prerequisites. How instant EC2 Fleet works When working with an EC2 Fleet of type instant, the sequence of events is as follows: 1. Configure: Configure the CreateFleet request type as instant. For more information, see Create an EC2 Fleet. Note that after you make the API call, you can't modify it. 2. Request: When you make the API call, Amazon EC2 places a synchronous one-time request for your desired capacity. 3. Response: The API response lists the instances that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete"} +{"global_id": 705, "doc_id": "ec2", "chunk_id": "34", "question_id": 2, "question": "What does EC2 Fleet provide compared to RunInstances?", "answer_span": "EC2 Fleet provides a wider set of capabilities than RunInstances", "chunk": "access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a wider set of capabilities than RunInstances, as demonstrated in the following examples. For all other workloads, you should use Amazon EC2 Auto Scaling because it supplies a more comprehensive feature set for a wide variety of workloads, like ELB-backed applications, containerized workloads, and queue processing jobs. You can use EC2 Fleet of type instant to launch instances into Capacity Blocks. For more information, see Tutorial: Configure your EC2 Fleet to launch instances into Capacity Blocks. AWS services like Amazon EC2 Auto Scaling and Amazon EMR use EC2 Fleet of type instant to launch EC2 instances. Request types 1939 Amazon Elastic Compute Cloud User Guide Prerequisites for EC2 Fleet of type instant For the prerequisites for creating an EC2 Fleet, see EC2 Fleet prerequisites. How instant EC2 Fleet works When working with an EC2 Fleet of type instant, the sequence of events is as follows: 1. 
Configure: Configure the CreateFleet request type as instant. For more information, see Create an EC2 Fleet. Note that after you make the API call, you can't modify it. 2. Request: When you make the API call, Amazon EC2 places a synchronous one-time request for your desired capacity. 3. Response: The API response lists the instances that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete"} +{"global_id": 706, "doc_id": "ec2", "chunk_id": "34", "question_id": 3, "question": "What is recommended for all other workloads?", "answer_span": "you should use Amazon EC2 Auto Scaling because it supplies a more comprehensive feature set for a wide variety of workloads", "chunk": "access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a wider set of capabilities than RunInstances, as demonstrated in the following examples. For all other workloads, you should use Amazon EC2 Auto Scaling because it supplies a more comprehensive feature set for a wide variety of workloads, like ELB-backed applications, containerized workloads, and queue processing jobs. You can use EC2 Fleet of type instant to launch instances into Capacity Blocks. For more information, see Tutorial: Configure your EC2 Fleet to launch instances into Capacity Blocks. AWS services like Amazon EC2 Auto Scaling and Amazon EMR use EC2 Fleet of type instant to launch EC2 instances. Request types 1939 Amazon Elastic Compute Cloud User Guide Prerequisites for EC2 Fleet of type instant For the prerequisites for creating an EC2 Fleet, see EC2 Fleet prerequisites. How instant EC2 Fleet works When working with an EC2 Fleet of type instant, the sequence of events is as follows: 1. Configure: Configure the CreateFleet request type as instant. For more information, see Create an EC2 Fleet. Note that after you make the API call, you can't modify it. 2. Request: When you make the API call, Amazon EC2 places a synchronous one-time request for your desired capacity. 3. Response: The API response lists the instances that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete"} +{"global_id": 707, "doc_id": "ec2", "chunk_id": "34", "question_id": 4, "question": "What is the first step when working with an EC2 Fleet of type instant?", "answer_span": "Configure the CreateFleet request type as instant.", "chunk": "access to a wider set of capabilities. For workloads that need a launch-only API, and where you prefer to manage the lifecycle of your instance rather than let EC2 Fleet manage it for you, use the EC2 Fleet of type instant instead of the RunInstances API. EC2 Fleet provides a wider set of capabilities than RunInstances, as demonstrated in the following examples. 
For all other workloads, you should use Amazon EC2 Auto Scaling because it supplies a more comprehensive feature set for a wide variety of workloads, like ELB-backed applications, containerized workloads, and queue processing jobs. You can use EC2 Fleet of type instant to launch instances into Capacity Blocks. For more information, see Tutorial: Configure your EC2 Fleet to launch instances into Capacity Blocks. AWS services like Amazon EC2 Auto Scaling and Amazon EMR use EC2 Fleet of type instant to launch EC2 instances. Request types 1939 Amazon Elastic Compute Cloud User Guide Prerequisites for EC2 Fleet of type instant For the prerequisites for creating an EC2 Fleet, see EC2 Fleet prerequisites. How instant EC2 Fleet works When working with an EC2 Fleet of type instant, the sequence of events is as follows: 1. Configure: Configure the CreateFleet request type as instant. For more information, see Create an EC2 Fleet. Note that after you make the API call, you can't modify it. 2. Request: When you make the API call, Amazon EC2 places a synchronous one-time request for your desired capacity. 3. Response: The API response lists the instances that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete"} +{"global_id": 708, "doc_id": "ec2", "chunk_id": "35", "question_id": 1, "question": "What can you do with your EC2 Fleet?", "answer_span": "You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet.", "chunk": "that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete fleet request: The fleet request can be deleted either manually or automatically: • Manual: You can delete the fleet request after your instances launch. Note that a deleted instant fleet with running instances is not supported. When you delete an instant fleet, Amazon EC2 automatically terminates all its instances. For fleets with more than 1000 instances, the deletion request might fail. If your fleet has more than 1000 instances, first terminate most of the instances manually, leaving 1000 or fewer. Then delete the fleet, and the remaining instances will be terminated automatically. • Automatic: Amazon EC2 deletes the fleet request some time after either: • All the instances are terminated. • The fleet fails to launch any instances. Examples The following examples show how to use EC2 Fleet of type instant for different use cases. For more information about using the EC2 CreateFleet API parameters, see CreateFleet in the Amazon EC2 API Reference. Examples • Example 1: Launch Spot Instances with the capacity-optimized allocation strategy Request types 1940 Amazon Elastic Compute Cloud User Guide Networking in Amazon EC2 Amazon VPC enables you to launch AWS resources, such as Amazon EC2 instances, into a virtual network dedicated to your AWS account, known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. 
The instance receives a primary private IP address from the IPv4 address of the"} +{"global_id": 709, "doc_id": "ec2", "chunk_id": "35", "question_id": 2, "question": "What happens when you delete an instant fleet with running instances?", "answer_span": "Note that a deleted instant fleet with running instances is not supported.", "chunk": "that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete fleet request: The fleet request can be deleted either manually or automatically: • Manual: You can delete the fleet request after your instances launch. Note that a deleted instant fleet with running instances is not supported. When you delete an instant fleet, Amazon EC2 automatically terminates all its instances. For fleets with more than 1000 instances, the deletion request might fail. If your fleet has more than 1000 instances, first terminate most of the instances manually, leaving 1000 or fewer. Then delete the fleet, and the remaining instances will be terminated automatically. • Automatic: Amazon EC2 deletes the fleet request some time after either: • All the instances are terminated. • The fleet fails to launch any instances. Examples The following examples show how to use EC2 Fleet of type instant for different use cases. For more information about using the EC2 CreateFleet API parameters, see CreateFleet in the Amazon EC2 API Reference. Examples • Example 1: Launch Spot Instances with the capacity-optimized allocation strategy Request types 1940 Amazon Elastic Compute Cloud User Guide Networking in Amazon EC2 Amazon VPC enables you to launch AWS resources, such as Amazon EC2 instances, into a virtual network dedicated to your AWS account, known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. The instance receives a primary private IP address from the IPv4 address of the"} +{"global_id": 710, "doc_id": "ec2", "chunk_id": "35", "question_id": 3, "question": "What might happen if your fleet has more than 1000 instances?", "answer_span": "For fleets with more than 1000 instances, the deletion request might fail.", "chunk": "that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete fleet request: The fleet request can be deleted either manually or automatically: • Manual: You can delete the fleet request after your instances launch. Note that a deleted instant fleet with running instances is not supported. When you delete an instant fleet, Amazon EC2 automatically terminates all its instances. For fleets with more than 1000 instances, the deletion request might fail. If your fleet has more than 1000 instances, first terminate most of the instances manually, leaving 1000 or fewer. Then delete the fleet, and the remaining instances will be terminated automatically. • Automatic: Amazon EC2 deletes the fleet request some time after either: • All the instances are terminated. • The fleet fails to launch any instances. 
Examples The following examples show how to use EC2 Fleet of type instant for different use cases. For more information about using the EC2 CreateFleet API parameters, see CreateFleet in the Amazon EC2 API Reference. Examples • Example 1: Launch Spot Instances with the capacity-optimized allocation strategy Request types 1940 Amazon Elastic Compute Cloud User Guide Networking in Amazon EC2 Amazon VPC enables you to launch AWS resources, such as Amazon EC2 instances, into a virtual network dedicated to your AWS account, known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. The instance receives a primary private IP address from the IPv4 address of the"} +{"global_id": 711, "doc_id": "ec2", "chunk_id": "35", "question_id": 4, "question": "What does Amazon EC2 do after all the instances are terminated?", "answer_span": "Amazon EC2 deletes the fleet request some time after either: All the instances are terminated.", "chunk": "that launched, along with errors for those instances that could not be launched. 4. Describe: You can describe your EC2 Fleet, list the instances associated with your EC2 Fleet, and view the history of your EC2 Fleet. 5. Terminate instances: You can terminate the instances at any time. 6. Delete fleet request: The fleet request can be deleted either manually or automatically: • Manual: You can delete the fleet request after your instances launch. Note that a deleted instant fleet with running instances is not supported. When you delete an instant fleet, Amazon EC2 automatically terminates all its instances. For fleets with more than 1000 instances, the deletion request might fail. If your fleet has more than 1000 instances, first terminate most of the instances manually, leaving 1000 or fewer. Then delete the fleet, and the remaining instances will be terminated automatically. • Automatic: Amazon EC2 deletes the fleet request some time after either: • All the instances are terminated. • The fleet fails to launch any instances. Examples The following examples show how to use EC2 Fleet of type instant for different use cases. For more information about using the EC2 CreateFleet API parameters, see CreateFleet in the Amazon EC2 API Reference. Examples • Example 1: Launch Spot Instances with the capacity-optimized allocation strategy Request types 1940 Amazon Elastic Compute Cloud User Guide Networking in Amazon EC2 Amazon VPC enables you to launch AWS resources, such as Amazon EC2 instances, into a virtual network dedicated to your AWS account, known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. The instance receives a primary private IP address from the IPv4 address of the"} +{"global_id": 712, "doc_id": "ec2", "chunk_id": "36", "question_id": 1, "question": "What is a virtual private cloud (VPC)?", "answer_span": "known as a virtual private cloud (VPC).", "chunk": "known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. The instance receives a primary private IP address from the IPv4 address of the subnet, and it is assigned to the primary network interface. 
You can control whether the instance receives a public IP address from Amazon's pool of public IP addresses. The public IP address of an instance is associated with your instance only until it is stopped or terminated. If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account and associate it with an instance or a network interface. An Elastic IP address remains associated with your AWS account until you release it, and you can move it from one instance to another as needed. You can bring your own IP address range to your AWS account, where it appears as an address pool, and then allocate Elastic IP addresses from your address pool. To increase network performance and reduce latency, you can launch instances in a placement group. You can get significantly higher packet per second (PPS) performance using enhanced networking. You can accelerate high performance computing and machine learning applications using an Elastic Fabric Adapter (EFA), which is a network device that you can attach to a supported instance type. Features • Regions and Zones • Amazon EC2 instance IP addressing • EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 •"} +{"global_id": 713, "doc_id": "ec2", "chunk_id": "36", "question_id": 2, "question": "What does the instance receive from the IPv4 address of the subnet?", "answer_span": "The instance receives a primary private IP address from the IPv4 address of the subnet.", "chunk": "known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. The instance receives a primary private IP address from the IPv4 address of the subnet, and it is assigned to the primary network interface. You can control whether the instance receives a public IP address from Amazon's pool of public IP addresses. The public IP address of an instance is associated with your instance only until it is stopped or terminated. If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account and associate it with an instance or a network interface. An Elastic IP address remains associated with your AWS account until you release it, and you can move it from one instance to another as needed. You can bring your own IP address range to your AWS account, where it appears as an address pool, and then allocate Elastic IP addresses from your address pool. To increase network performance and reduce latency, you can launch instances in a placement group. You can get significantly higher packet per second (PPS) performance using enhanced networking. You can accelerate high performance computing and machine learning applications using an Elastic Fabric Adapter (EFA), which is a network device that you can attach to a supported instance type. 
Features • Regions and Zones • Amazon EC2 instance IP addressing • EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 •"} +{"global_id": 714, "doc_id": "ec2", "chunk_id": "36", "question_id": 3, "question": "What can you allocate for a persistent public IP address?", "answer_span": "If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account.", "chunk": "known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. The instance receives a primary private IP address from the IPv4 address of the subnet, and it is assigned to the primary network interface. You can control whether the instance receives a public IP address from Amazon's pool of public IP addresses. The public IP address of an instance is associated with your instance only until it is stopped or terminated. If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account and associate it with an instance or a network interface. An Elastic IP address remains associated with your AWS account until you release it, and you can move it from one instance to another as needed. You can bring your own IP address range to your AWS account, where it appears as an address pool, and then allocate Elastic IP addresses from your address pool. To increase network performance and reduce latency, you can launch instances in a placement group. You can get significantly higher packet per second (PPS) performance using enhanced networking. You can accelerate high performance computing and machine learning applications using an Elastic Fabric Adapter (EFA), which is a network device that you can attach to a supported instance type. Features • Regions and Zones • Amazon EC2 instance IP addressing • EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 •"} +{"global_id": 715, "doc_id": "ec2", "chunk_id": "36", "question_id": 4, "question": "What device can you attach to a supported instance type for high performance computing?", "answer_span": "You can accelerate high performance computing and machine learning applications using an Elastic Fabric Adapter (EFA).", "chunk": "known as a virtual private cloud (VPC). When you launch an instance, you can select a subnet from the VPC. The instance is configured with a primary network interface, which is a logical virtual network card. The instance receives a primary private IP address from the IPv4 address of the subnet, and it is assigned to the primary network interface. You can control whether the instance receives a public IP address from Amazon's pool of public IP addresses. The public IP address of an instance is associated with your instance only until it is stopped or terminated. If you require a persistent public IP address, you can allocate an Elastic IP address for your AWS account and associate it with an instance or a network interface. 
An Elastic IP address remains associated with your AWS account until you release it, and you can move it from one instance to another as needed. You can bring your own IP address range to your AWS account, where it appears as an address pool, and then allocate Elastic IP addresses from your address pool. To increase network performance and reduce latency, you can launch instances in a placement group. You can get significantly higher packet per second (PPS) performance using enhanced networking. You can accelerate high performance computing and machine learning applications using an Elastic Fabric Adapter (EFA), which is a network device that you can attach to a supported instance type. Features • Regions and Zones • Amazon EC2 instance IP addressing • EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 •"} +{"global_id": 716, "doc_id": "ec2", "chunk_id": "37", "question_id": 1, "question": "What are Availability Zones?", "answer_span": "Availability Zones are multiple, isolated locations within each Region.", "chunk": "• EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 • Amazon EC2 instance topology • Placement groups for your Amazon EC2 instances 2176 Amazon Elastic Compute Cloud User Guide • Network maximum transmission unit (MTU) for your EC2 instance • Virtual private clouds for your EC2 instances Regions and Zones Amazon EC2 is hosted in multiple locations world-wide. These locations are composed of AWS Regions, Availability Zones, Local Zones, AWS Outposts, and Wavelength Zones. • Regions are separate geographic areas. • Availability Zones are multiple, isolated locations within each Region. • Local Zones provide you with the ability to place resources, such as compute and storage, in multiple locations closer to your end users. • Wavelength Zones provide you with the ability to build applications that deliver ultra-low latencies to 5G devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks. • AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, colocation space, or on-premises facility. AWS operates state-of-the-art, highly available data centers. Although rare, failures can occur that affect the availability of instances that are in the same location. If you host all of your instances in a single location that is affected by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. 
This achieves the greatest possible fault"} +{"global_id": 717, "doc_id": "ec2", "chunk_id": "37", "question_id": 2, "question": "What do Local Zones provide?", "answer_span": "Local Zones provide you with the ability to place resources, such as compute and storage, in multiple locations closer to your end users.", "chunk": "• EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 • Amazon EC2 instance topology • Placement groups for your Amazon EC2 instances 2176 Amazon Elastic Compute Cloud User Guide • Network maximum transmission unit (MTU) for your EC2 instance • Virtual private clouds for your EC2 instances Regions and Zones Amazon EC2 is hosted in multiple locations world-wide. These locations are composed of AWS Regions, Availability Zones, Local Zones, AWS Outposts, and Wavelength Zones. • Regions are separate geographic areas. • Availability Zones are multiple, isolated locations within each Region. • Local Zones provide you with the ability to place resources, such as compute and storage, in multiple locations closer to your end users. • Wavelength Zones provide you with the ability to build applications that deliver ultra-low latencies to 5G devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks. • AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, colocation space, or on-premises facility. AWS operates state-of-the-art, highly available data centers. Although rare, failures can occur that affect the availability of instances that are in the same location. If you host all of your instances in a single location that is affected by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. This achieves the greatest possible fault"} +{"global_id": 718, "doc_id": "ec2", "chunk_id": "37", "question_id": 3, "question": "What is the purpose of Wavelength Zones?", "answer_span": "Wavelength Zones provide you with the ability to build applications that deliver ultra-low latencies to 5G devices and end users.", "chunk": "• EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 • Amazon EC2 instance topology • Placement groups for your Amazon EC2 instances 2176 Amazon Elastic Compute Cloud User Guide • Network maximum transmission unit (MTU) for your EC2 instance • Virtual private clouds for your EC2 instances Regions and Zones Amazon EC2 is hosted in multiple locations world-wide. These locations are composed of AWS Regions, Availability Zones, Local Zones, AWS Outposts, and Wavelength Zones. • Regions are separate geographic areas. • Availability Zones are multiple, isolated locations within each Region. • Local Zones provide you with the ability to place resources, such as compute and storage, in multiple locations closer to your end users. 
• Wavelength Zones provide you with the ability to build applications that deliver ultra-low latencies to 5G devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks. • AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, colocation space, or on-premises facility. AWS operates state-of-the-art, highly available data centers. Although rare, failures can occur that affect the availability of instances that are in the same location. If you host all of your instances in a single location that is affected by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. This achieves the greatest possible fault"} +{"global_id": 719, "doc_id": "ec2", "chunk_id": "37", "question_id": 4, "question": "What does AWS Outposts bring to data centers?", "answer_span": "AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, colocation space, or on-premises facility.", "chunk": "• EC2 instance hostnames and domains • Bring your own IP addresses (BYOIP) to Amazon EC2 • Elastic IP addresses • Elastic network interfaces • Amazon EC2 instance network bandwidth • Enhanced networking on Amazon EC2 instances • Elastic Fabric Adapter for AI/ML and HPC workloads on Amazon EC2 • Amazon EC2 instance topology • Placement groups for your Amazon EC2 instances 2176 Amazon Elastic Compute Cloud User Guide • Network maximum transmission unit (MTU) for your EC2 instance • Virtual private clouds for your EC2 instances Regions and Zones Amazon EC2 is hosted in multiple locations world-wide. These locations are composed of AWS Regions, Availability Zones, Local Zones, AWS Outposts, and Wavelength Zones. • Regions are separate geographic areas. • Availability Zones are multiple, isolated locations within each Region. • Local Zones provide you with the ability to place resources, such as compute and storage, in multiple locations closer to your end users. • Wavelength Zones provide you with the ability to build applications that deliver ultra-low latencies to 5G devices and end users. Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks. • AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, colocation space, or on-premises facility. AWS operates state-of-the-art, highly available data centers. Although rare, failures can occur that affect the availability of instances that are in the same location. If you host all of your instances in a single location that is affected by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. This achieves the greatest possible fault"} +{"global_id": 720, "doc_id": "ec2", "chunk_id": "38", "question_id": 1, "question": "What happens if there is a failure?", "answer_span": "by a failure, none of your instances would be available.", "chunk": "by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. 
Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. This achieves the greatest possible fault tolerance and stability. Regions and Zones 2177 Amazon Elastic Compute Cloud User Guide When you launch an instance, select a Region that puts your instances close to specific customers, or that meets the legal or other requirements that you have. You can launch instances in multiple Regions. When you view your resources, you see only the resources that are tied to the Region that you specified. This is because Regions are isolated from each other, and we don't automatically replicate resources across Regions. Available Regions For the list of available Regions, see AWS Regions. Regional endpoints for Amazon EC2 When you work with an instance using the command line interface or API actions, you must specify its Regional endpoint. For more information about the Regions and endpoints for Amazon EC2, see Amazon EC2 service endpoints in the Amazon EC2 Developer Guide. For more information, see AWS Regions in the AWS Regions and Availability Zones User Guide. Availability Zones Each Region has multiple, isolated locations known as Availability Zones. The code for an Availability Zone is its Region code followed by a letter identifier. For example, us-east-1a. By launching EC2 instances in multiple Availability Zones, you can protect your applications from the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this"} +{"global_id": 721, "doc_id": "ec2", "chunk_id": "38", "question_id": 2, "question": "What is the purpose of each Region?", "answer_span": "Each Region is designed to be isolated from the other Regions.", "chunk": "by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. This achieves the greatest possible fault tolerance and stability. Regions and Zones 2177 Amazon Elastic Compute Cloud User Guide When you launch an instance, select a Region that puts your instances close to specific customers, or that meets the legal or other requirements that you have. You can launch instances in multiple Regions. When you view your resources, you see only the resources that are tied to the Region that you specified. This is because Regions are isolated from each other, and we don't automatically replicate resources across Regions. Available Regions For the list of available Regions, see AWS Regions. Regional endpoints for Amazon EC2 When you work with an instance using the command line interface or API actions, you must specify its Regional endpoint. For more information about the Regions and endpoints for Amazon EC2, see Amazon EC2 service endpoints in the Amazon EC2 Developer Guide. For more information, see AWS Regions in the AWS Regions and Availability Zones User Guide. Availability Zones Each Region has multiple, isolated locations known as Availability Zones. The code for an Availability Zone is its Region code followed by a letter identifier. For example, us-east-1a. 
By launching EC2 instances in multiple Availability Zones, you can protect your applications from the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this"} +{"global_id": 722, "doc_id": "ec2", "chunk_id": "38", "question_id": 3, "question": "What do you see when you view your resources?", "answer_span": "When you view your resources, you see only the resources that are tied to the Region that you specified.", "chunk": "by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. This achieves the greatest possible fault tolerance and stability. Regions and Zones 2177 Amazon Elastic Compute Cloud User Guide When you launch an instance, select a Region that puts your instances close to specific customers, or that meets the legal or other requirements that you have. You can launch instances in multiple Regions. When you view your resources, you see only the resources that are tied to the Region that you specified. This is because Regions are isolated from each other, and we don't automatically replicate resources across Regions. Available Regions For the list of available Regions, see AWS Regions. Regional endpoints for Amazon EC2 When you work with an instance using the command line interface or API actions, you must specify its Regional endpoint. For more information about the Regions and endpoints for Amazon EC2, see Amazon EC2 service endpoints in the Amazon EC2 Developer Guide. For more information, see AWS Regions in the AWS Regions and Availability Zones User Guide. Availability Zones Each Region has multiple, isolated locations known as Availability Zones. The code for an Availability Zone is its Region code followed by a letter identifier. For example, us-east-1a. By launching EC2 instances in multiple Availability Zones, you can protect your applications from the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this"} +{"global_id": 723, "doc_id": "ec2", "chunk_id": "38", "question_id": 4, "question": "What is the code for an Availability Zone?", "answer_span": "The code for an Availability Zone is its Region code followed by a letter identifier.", "chunk": "by a failure, none of your instances would be available. For more information, see AWS Global Infrastructure. Contents • Regions • Availability Zones • Local Zones • Wavelength Zones • AWS Outposts Regions Each Region is designed to be isolated from the other Regions. This achieves the greatest possible fault tolerance and stability. Regions and Zones 2177 Amazon Elastic Compute Cloud User Guide When you launch an instance, select a Region that puts your instances close to specific customers, or that meets the legal or other requirements that you have. You can launch instances in multiple Regions. When you view your resources, you see only the resources that are tied to the Region that you specified. 
This is because Regions are isolated from each other, and we don't automatically replicate resources across Regions. Available Regions For the list of available Regions, see AWS Regions. Regional endpoints for Amazon EC2 When you work with an instance using the command line interface or API actions, you must specify its Regional endpoint. For more information about the Regions and endpoints for Amazon EC2, see Amazon EC2 service endpoints in the Amazon EC2 Developer Guide. For more information, see AWS Regions in the AWS Regions and Availability Zones User Guide. Availability Zones Each Region has multiple, isolated locations known as Availability Zones. The code for an Availability Zone is its Region code followed by a letter identifier. For example, us-east-1a. By launching EC2 instances in multiple Availability Zones, you can protect your applications from the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this"} +{"global_id": 724, "doc_id": "ec2", "chunk_id": "39", "question_id": 1, "question": "What does the diagram illustrate in an AWS Region?", "answer_span": "The following diagram illustrates multiple Availability Zones in an AWS Region.", "chunk": "the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this Availability Zone. Availability Zones 2178 Amazon Elastic Compute Cloud User Guide For more information, see Virtual private clouds for your EC2 instances. Availability Zones by Region For the list of Availability Zones by Region, see AWS Availability Zones. Instances in Availability Zones When you launch an instance, you select a Region and a virtual private cloud (VPC). Then, you can either select a subnet from one of the Availability Zones or let us choose a subnet for you. When you launch your initial instances, we recommend that you let us select an Availability Zone for you based on system health and available capacity. If you launch additional instances, specify an Availability Zone only if your new instances must be close to, or separated from, your existing instances. If you distribute instances across multiple Availability Zones and an instance fails, you can design your application so that an instance in another Availability Zone handles requests instead. For more information, see AWS Availability Zones in the AWS Regions and Availability Zones User Guide. Availability Zones 2179"} +{"global_id": 725, "doc_id": "ec2", "chunk_id": "39", "question_id": 2, "question": "How many subnets does Availability Zone A have?", "answer_span": "Availability Zone A and Availability Zone B each have one subnet.", "chunk": "the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this Availability Zone. Availability Zones 2178 Amazon Elastic Compute Cloud User Guide For more information, see Virtual private clouds for your EC2 instances. 
Availability Zones by Region For the list of Availability Zones by Region, see AWS Availability Zones. Instances in Availability Zones When you launch an instance, you select a Region and a virtual private cloud (VPC). Then, you can either select a subnet from one of the Availability Zones or let us choose a subnet for you. When you launch your initial instances, we recommend that you let us select an Availability Zone for you based on system health and available capacity. If you launch additional instances, specify an Availability Zone only if your new instances must be close to, or separated from, your existing instances. If you distribute instances across multiple Availability Zones and an instance fails, you can design your application so that an instance in another Availability Zone handles requests instead. For more information, see AWS Availability Zones in the AWS Regions and Availability Zones User Guide. Availability Zones 2179"} +{"global_id": 726, "doc_id": "ec2", "chunk_id": "39", "question_id": 3, "question": "Can you launch instances into Availability Zone C?", "answer_span": "Availability Zone C has no subnets, therefore you can't launch instances into this Availability Zone.", "chunk": "the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this Availability Zone. Availability Zones 2178 Amazon Elastic Compute Cloud User Guide For more information, see Virtual private clouds for your EC2 instances. Availability Zones by Region For the list of Availability Zones by Region, see AWS Availability Zones. Instances in Availability Zones When you launch an instance, you select a Region and a virtual private cloud (VPC). Then, you can either select a subnet from one of the Availability Zones or let us choose a subnet for you. When you launch your initial instances, we recommend that you let us select an Availability Zone for you based on system health and available capacity. If you launch additional instances, specify an Availability Zone only if your new instances must be close to, or separated from, your existing instances. If you distribute instances across multiple Availability Zones and an instance fails, you can design your application so that an instance in another Availability Zone handles requests instead. For more information, see AWS Availability Zones in the AWS Regions and Availability Zones User Guide. Availability Zones 2179"} +{"global_id": 727, "doc_id": "ec2", "chunk_id": "39", "question_id": 4, "question": "What do you select when you launch an instance?", "answer_span": "When you launch an instance, you select a Region and a virtual private cloud (VPC).", "chunk": "the failure of a single location in the Region. The following diagram illustrates multiple Availability Zones in an AWS Region. Availability Zone A and Availability Zone B each have one subnet, and each subnet has EC2 instances. Availability Zone C has no subnets, therefore you can't launch instances into this Availability Zone. Availability Zones 2178 Amazon Elastic Compute Cloud User Guide For more information, see Virtual private clouds for your EC2 instances. Availability Zones by Region For the list of Availability Zones by Region, see AWS Availability Zones. 
Instances in Availability Zones When you launch an instance, you select a Region and a virtual private cloud (VPC). Then, you can either select a subnet from one of the Availability Zones or let us choose a subnet for you. When you launch your initial instances, we recommend that you let us select an Availability Zone for you based on system health and available capacity. If you launch additional instances, specify an Availability Zone only if your new instances must be close to, or separated from, your existing instances. If you distribute instances across multiple Availability Zones and an instance fails, you can design your application so that an instance in another Availability Zone handles requests instead. For more information, see AWS Availability Zones in the AWS Regions and Availability Zones User Guide. Availability Zones 2179"} +{"global_id": 728, "doc_id": "qdeveloper", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Q Developer built on?", "answer_span": "Amazon Q Developer is built on Amazon Bedrock", "chunk": "Amazon Q Developer User Guide What is Amazon Q Developer? Note Powered by Amazon Bedrock: Amazon Q Developer is built on Amazon Bedrock and includes automated abuse detection implemented in Amazon Bedrock to enforce safety, security, and the responsible use of AI. Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant that can help you understand, build, extend, and operate AWS applications. You can ask questions about AWS architecture, your AWS resources, best practices, documentation, support, and more. Amazon Q is constantly updating its capabilities so your questions get the most contextually relevant and actionable answers. When used in an integrated development environment (IDE), Amazon Q provides software development assistance. Amazon Q can chat about code, provide inline code completions, generate net new code, scan your code for security vulnerabilities, and make code upgrades and improvements, such as language updates, debugging, and optimizations. Amazon Q is powered by Amazon Bedrock, a fully managed service that makes foundation models (FMs) available through an API. The model that powers Amazon Q has been augmented with high quality AWS content to get you more complete, actionable, and referenced answers to accelerate your building on AWS. Note This is the documentation for Amazon Q Developer. If you are looking for documentation for Amazon Q Business, see the Amazon Q Business User Guide. Get started with Amazon Q Developer To quickly get started using Amazon Q, you can access it in the following ways: Get started 1 Amazon Q Developer User Guide AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites."} +{"global_id": 729, "doc_id": "qdeveloper", "chunk_id": "0", "question_id": 2, "question": "What type of assistant is Amazon Q Developer?", "answer_span": "Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant", "chunk": "Amazon Q Developer User Guide What is Amazon Q Developer? Note Powered by Amazon Bedrock: Amazon Q Developer is built on Amazon Bedrock and includes automated abuse detection implemented in Amazon Bedrock to enforce safety, security, and the responsible use of AI. 
Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant that can help you understand, build, extend, and operate AWS applications. You can ask questions about AWS architecture, your AWS resources, best practices, documentation, support, and more. Amazon Q is constantly updating its capabilities so your questions get the most contextually relevant and actionable answers. When used in an integrated development environment (IDE), Amazon Q provides software development assistance. Amazon Q can chat about code, provide inline code completions, generate net new code, scan your code for security vulnerabilities, and make code upgrades and improvements, such as language updates, debugging, and optimizations. Amazon Q is powered by Amazon Bedrock, a fully managed service that makes foundation models (FMs) available through an API. The model that powers Amazon Q has been augmented with high quality AWS content to get you more complete, actionable, and referenced answers to accelerate your building on AWS. Note This is the documentation for Amazon Q Developer. If you are looking for documentation for Amazon Q Business, see the Amazon Q Business User Guide. Get started with Amazon Q Developer To quickly get started using Amazon Q, you can access it in the following ways: Get started 1 Amazon Q Developer User Guide AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites."} +{"global_id": 730, "doc_id": "qdeveloper", "chunk_id": "0", "question_id": 3, "question": "What can Amazon Q help you with?", "answer_span": "Amazon Q can chat about code, provide inline code completions, generate net new code, scan your code for security vulnerabilities, and make code upgrades and improvements", "chunk": "Amazon Q Developer User Guide What is Amazon Q Developer? Note Powered by Amazon Bedrock: Amazon Q Developer is built on Amazon Bedrock and includes automated abuse detection implemented in Amazon Bedrock to enforce safety, security, and the responsible use of AI. Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant that can help you understand, build, extend, and operate AWS applications. You can ask questions about AWS architecture, your AWS resources, best practices, documentation, support, and more. Amazon Q is constantly updating its capabilities so your questions get the most contextually relevant and actionable answers. When used in an integrated development environment (IDE), Amazon Q provides software development assistance. Amazon Q can chat about code, provide inline code completions, generate net new code, scan your code for security vulnerabilities, and make code upgrades and improvements, such as language updates, debugging, and optimizations. Amazon Q is powered by Amazon Bedrock, a fully managed service that makes foundation models (FMs) available through an API. The model that powers Amazon Q has been augmented with high quality AWS content to get you more complete, actionable, and referenced answers to accelerate your building on AWS. Note This is the documentation for Amazon Q Developer. If you are looking for documentation for Amazon Q Business, see the Amazon Q Business User Guide. 
Get started with Amazon Q Developer To quickly get started using Amazon Q, you can access it in the following ways: Get started 1 Amazon Q Developer User Guide AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites."} +{"global_id": 731, "doc_id": "qdeveloper", "chunk_id": "0", "question_id": 4, "question": "How can you get started with Amazon Q Developer?", "answer_span": "Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application", "chunk": "Amazon Q Developer User Guide What is Amazon Q Developer? Note Powered by Amazon Bedrock: Amazon Q Developer is built on Amazon Bedrock and includes automated abuse detection implemented in Amazon Bedrock to enforce safety, security, and the responsible use of AI. Amazon Q Developer is a generative artificial intelligence (AI) powered conversational assistant that can help you understand, build, extend, and operate AWS applications. You can ask questions about AWS architecture, your AWS resources, best practices, documentation, support, and more. Amazon Q is constantly updating its capabilities so your questions get the most contextually relevant and actionable answers. When used in an integrated development environment (IDE), Amazon Q provides software development assistance. Amazon Q can chat about code, provide inline code completions, generate net new code, scan your code for security vulnerabilities, and make code upgrades and improvements, such as language updates, debugging, and optimizations. Amazon Q is powered by Amazon Bedrock, a fully managed service that makes foundation models (FMs) available through an API. The model that powers Amazon Q has been augmented with high quality AWS content to get you more complete, actionable, and referenced answers to accelerate your building on AWS. Note This is the documentation for Amazon Q Developer. If you are looking for documentation for Amazon Q Business, see the Amazon Q Business User Guide. Get started with Amazon Q Developer To quickly get started using Amazon Q, you can access it in the following ways: Get started 1 Amazon Q Developer User Guide AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites."} +{"global_id": 732, "doc_id": "qdeveloper", "chunk_id": "1", "question_id": 1, "question": "What do you need to do to start chatting in the AWS Management Console?", "answer_span": "Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console.", "chunk": "AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites. IDEs Download the Amazon Q extension and use your AWS Builder ID (no AWS account required) to sign in for free. 
Download Amazon Q in Visual Studio Code Download Amazon Q in JetBrains IDEs Download Amazon Q in the AWS Toolkit for Visual Studio Download Amazon Q in Eclipse IDEs (Preview) From your IDE, choose the Amazon Q icon to start chatting or initiate a development workflow. For more information, see Installing the Amazon Q Developer extension or plugin in your IDE. Command line Download Amazon Q for command line for macOS Download Amazon Q for command line for Linux AppImage Get started 2 Amazon Q Developer User Guide Download Amazon Q for command line for Ubuntu For more information, see Using Amazon Q Developer on the command line. Amazon Q Developer in chat applications Add the AmazonQDeveloperAccess managed policy to your IAM identity and channel guardrails for Microsoft Teams or Slack applications. For more information, see Chatting with Amazon Q Developer in chat applications. Amazon Q Developer pricing Amazon Q Developer is available through a Free tier and the Amazon Q Developer Pro subscription. For more information, see Amazon Q Developer pricing. Amazon Q Developer pricing 3 Amazon Q Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get"} +{"global_id": 733, "doc_id": "qdeveloper", "chunk_id": "1", "question_id": 2, "question": "What is required to sign in for free to use the Amazon Q extension?", "answer_span": "use your AWS Builder ID (no AWS account required) to sign in for free.", "chunk": "AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites. IDEs Download the Amazon Q extension and use your AWS Builder ID (no AWS account required) to sign in for free. Download Amazon Q in Visual Studio Code Download Amazon Q in JetBrains IDEs Download Amazon Q in the AWS Toolkit for Visual Studio Download Amazon Q in Eclipse IDEs (Preview) From your IDE, choose the Amazon Q icon to start chatting or initiate a development workflow. For more information, see Installing the Amazon Q Developer extension or plugin in your IDE. Command line Download Amazon Q for command line for macOS Download Amazon Q for command line for Linux AppImage Get started 2 Amazon Q Developer User Guide Download Amazon Q for command line for Ubuntu For more information, see Using Amazon Q Developer on the command line. Amazon Q Developer in chat applications Add the AmazonQDeveloperAccess managed policy to your IAM identity and channel guardrails for Microsoft Teams or Slack applications. For more information, see Chatting with Amazon Q Developer in chat applications. Amazon Q Developer pricing Amazon Q Developer is available through a Free tier and the Amazon Q Developer Pro subscription. For more information, see Amazon Q Developer pricing. Amazon Q Developer pricing 3 Amazon Q Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. 
Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get"} +{"global_id": 734, "doc_id": "qdeveloper", "chunk_id": "1", "question_id": 3, "question": "What managed policy should be added to your IAM identity for chat applications?", "answer_span": "Add the AmazonQDeveloperAccess managed policy to your IAM identity and channel guardrails for Microsoft Teams or Slack applications.", "chunk": "AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites. IDEs Download the Amazon Q extension and use your AWS Builder ID (no AWS account required) to sign in for free. Download Amazon Q in Visual Studio Code Download Amazon Q in JetBrains IDEs Download Amazon Q in the AWS Toolkit for Visual Studio Download Amazon Q in Eclipse IDEs (Preview) From your IDE, choose the Amazon Q icon to start chatting or initiate a development workflow. For more information, see Installing the Amazon Q Developer extension or plugin in your IDE. Command line Download Amazon Q for command line for macOS Download Amazon Q for command line for Linux AppImage Get started 2 Amazon Q Developer User Guide Download Amazon Q for command line for Ubuntu For more information, see Using Amazon Q Developer on the command line. Amazon Q Developer in chat applications Add the AmazonQDeveloperAccess managed policy to your IAM identity and channel guardrails for Microsoft Teams or Slack applications. For more information, see Chatting with Amazon Q Developer in chat applications. Amazon Q Developer pricing Amazon Q Developer is available through a Free tier and the Amazon Q Developer Pro subscription. For more information, see Amazon Q Developer pricing. Amazon Q Developer pricing 3 Amazon Q Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get"} +{"global_id": 735, "doc_id": "qdeveloper", "chunk_id": "1", "question_id": 4, "question": "What subscription options are available for Amazon Q Developer?", "answer_span": "Amazon Q Developer is available through a Free tier and the Amazon Q Developer Pro subscription.", "chunk": "AWS apps and websites Add the necessary permissions to your IAM identity, and then choose the Amazon Q icon to start chatting in the AWS Management Console, AWS Documentation website, AWS website, or AWS Console Mobile Application. For more information, see Using Amazon Q Developer on AWS apps and websites. IDEs Download the Amazon Q extension and use your AWS Builder ID (no AWS account required) to sign in for free. Download Amazon Q in Visual Studio Code Download Amazon Q in JetBrains IDEs Download Amazon Q in the AWS Toolkit for Visual Studio Download Amazon Q in Eclipse IDEs (Preview) From your IDE, choose the Amazon Q icon to start chatting or initiate a development workflow. For more information, see Installing the Amazon Q Developer extension or plugin in your IDE. 
Command line Download Amazon Q for command line for macOS Download Amazon Q for command line for Linux AppImage Get started 2 Amazon Q Developer User Guide Download Amazon Q for command line for Ubuntu For more information, see Using Amazon Q Developer on the command line. Amazon Q Developer in chat applications Add the AmazonQDeveloperAccess managed policy to your IAM identity and channel guardrails for Microsoft Teams or Slack applications. For more information, see Chatting with Amazon Q Developer in chat applications. Amazon Q Developer pricing Amazon Q Developer is available through a Free tier and the Amazon Q Developer Pro subscription. For more information, see Amazon Q Developer pricing. Amazon Q Developer pricing 3 Amazon Q Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get"} +{"global_id": 736, "doc_id": "qdeveloper", "chunk_id": "2", "question_id": 1, "question": "What is Amazon Q Developer available across?", "answer_span": "Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs.", "chunk": "Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get help with code, explore resources, or troubleshoot. When you chat with Amazon Q, Amazon Q uses the context of your current conversation to inform its responses. You can ask follow-up questions or refer to its response when you ask a new question. Other Amazon Q Developer features are available as a part of your workflows in AWS service consoles and supported IDEs. The following sections explain the different features of Amazon Q Developer that you might encounter across your AWS experience. Analytics Summarizing your data With Amazon Q QuickSight, you can utilize the Generative BI authoring experience, create executive summaries of your data, ask and answer questions of data, and generate data stories. For more information, see Using Generative BI with Amazon Q QuickSight in the QuickSight User Guide. Management and governance Exploring nodes using text prompts Using AWS Systems Manager and Amazon Q, you can ask natural language questions about your managed nodes or instances. Amazon Q then uses the Systems Manager ListNodes action and creates filters based on your textual input to retrieve results. For more information, see Exploring nodes using text prompts in Amazon Q in the AWS Systems Manager User Guide. Analytics 4 Amazon Q Developer User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. 
Amazon Q Developer now helps you accelerate"} +{"global_id": 737, "doc_id": "qdeveloper", "chunk_id": "2", "question_id": 2, "question": "What can you do with Amazon Q QuickSight?", "answer_span": "With Amazon Q QuickSight, you can utilize the Generative BI authoring experience, create executive summaries of your data, ask and answer questions of data, and generate data stories.", "chunk": "Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get help with code, explore resources, or troubleshoot. When you chat with Amazon Q, Amazon Q uses the context of your current conversation to inform its responses. You can ask follow-up questions or refer to its response when you ask a new question. Other Amazon Q Developer features are available as a part of your workflows in AWS service consoles and supported IDEs. The following sections explain the different features of Amazon Q Developer that you might encounter across your AWS experience. Analytics Summarizing your data With Amazon Q QuickSight, you can utilize the Generative BI authoring experience, create executive summaries of your data, ask and answer questions of data, and generate data stories. For more information, see Using Generative BI with Amazon Q QuickSight in the QuickSight User Guide. Management and governance Exploring nodes using text prompts Using AWS Systems Manager and Amazon Q, you can ask natural language questions about your managed nodes or instances. Amazon Q then uses the Systems Manager ListNodes action and creates filters based on your textual input to retrieve results. For more information, see Exploring nodes using text prompts in Amazon Q in the AWS Systems Manager User Guide. Analytics 4 Amazon Q Developer User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. Amazon Q Developer now helps you accelerate"} +{"global_id": 738, "doc_id": "qdeveloper", "chunk_id": "2", "question_id": 3, "question": "How does Amazon Q assist with managed nodes or instances?", "answer_span": "Using AWS Systems Manager and Amazon Q, you can ask natural language questions about your managed nodes or instances.", "chunk": "Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get help with code, explore resources, or troubleshoot. When you chat with Amazon Q, Amazon Q uses the context of your current conversation to inform its responses. You can ask follow-up questions or refer to its response when you ask a new question. Other Amazon Q Developer features are available as a part of your workflows in AWS service consoles and supported IDEs. The following sections explain the different features of Amazon Q Developer that you might encounter across your AWS experience. 
Analytics Summarizing your data With Amazon Q QuickSight, you can utilize the Generative BI authoring experience, create executive summaries of your data, ask and answer questions of data, and generate data stories. For more information, see Using Generative BI with Amazon Q QuickSight in the QuickSight User Guide. Management and governance Exploring nodes using text prompts Using AWS Systems Manager and Amazon Q, you can ask natural language questions about your managed nodes or instances. Amazon Q then uses the Systems Manager ListNodes action and creates filters based on your textual input to retrieve results. For more information, see Exploring nodes using text prompts in Amazon Q in the AWS Systems Manager User Guide. Analytics 4 Amazon Q Developer User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. Amazon Q Developer now helps you accelerate"} +{"global_id": 739, "doc_id": "qdeveloper", "chunk_id": "2", "question_id": 4, "question": "What does Amazon CloudWatch investigations enhance?", "answer_span": "Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment.", "chunk": "Developer User Guide Amazon Q Developer features Amazon Q Developer is available across AWS environments and services, and also as a coding assistant in third party IDEs. Many of Amazon Q Developer’s capabilities exist in a chat interface, where you can use natural language to ask questions about AWS, get help with code, explore resources, or troubleshoot. When you chat with Amazon Q, Amazon Q uses the context of your current conversation to inform its responses. You can ask follow-up questions or refer to its response when you ask a new question. Other Amazon Q Developer features are available as a part of your workflows in AWS service consoles and supported IDEs. The following sections explain the different features of Amazon Q Developer that you might encounter across your AWS experience. Analytics Summarizing your data With Amazon Q QuickSight, you can utilize the Generative BI authoring experience, create executive summaries of your data, ask and answer questions of data, and generate data stories. For more information, see Using Generative BI with Amazon Q QuickSight in the QuickSight User Guide. Management and governance Exploring nodes using text prompts Using AWS Systems Manager and Amazon Q, you can ask natural language questions about your managed nodes or instances. Amazon Q then uses the Systems Manager ListNodes action and creates filters based on your textual input to retrieve results. For more information, see Exploring nodes using text prompts in Amazon Q in the AWS Systems Manager User Guide. Analytics 4 Amazon Q Developer User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. 
Amazon Q Developer now helps you accelerate"} +{"global_id": 740, "doc_id": "qdeveloper", "chunk_id": "3", "question_id": 1, "question": "What does Amazon CloudWatch investigations enhance?", "answer_span": "Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment.", "chunk": "User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. Amazon Q Developer now helps you accelerate CloudWatch investigations across your AWS environment. Q looks for anomalies in your telemetry, surfaces related signals for you to explore, identifies potential root-cause hypothesis, and suggests next steps to help you remediate issues faster. By integrating Amazon Q into your investigative workflows, you can accelerate problem solving, enhance your understanding of your AWS environment, and make more informed decisions about your infrastructure and applications. For example questions to ask Amazon Q in the context of Amazon CloudWatch investigations, see Chatting about your telemetry and operations . For more information about CloudWatch investigations in general, see CloudWatch investigations in the Amazon CloudWatch User Guide. Taking inventory of your AWS resources You can ask Amazon Q about your specific AWS account resources from anywhere in the AWS Management Console. You might not know where to locate relevant information about your resources, or you might be in one service console and want to access information about another service’s resources without disrupting your workflow. Amazon Q Developer answers your natural language questions about resources and provides deep links to those resources so you can quickly find them. You can ask Amazon Q to list a type of resource in your account, for details about a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. In that case, you can ask Amazon Q your question in natural language, and it will provide an"} +{"global_id": 741, "doc_id": "qdeveloper", "chunk_id": "3", "question_id": 2, "question": "How does Amazon Q help with CloudWatch investigations?", "answer_span": "Amazon Q Developer now helps you accelerate CloudWatch investigations across your AWS environment.", "chunk": "User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. Amazon Q Developer now helps you accelerate CloudWatch investigations across your AWS environment. Q looks for anomalies in your telemetry, surfaces related signals for you to explore, identifies potential root-cause hypothesis, and suggests next steps to help you remediate issues faster. By integrating Amazon Q into your investigative workflows, you can accelerate problem solving, enhance your understanding of your AWS environment, and make more informed decisions about your infrastructure and applications. 
For example questions to ask Amazon Q in the context of Amazon CloudWatch investigations, see Chatting about your telemetry and operations . For more information about CloudWatch investigations in general, see CloudWatch investigations in the Amazon CloudWatch User Guide. Taking inventory of your AWS resources You can ask Amazon Q about your specific AWS account resources from anywhere in the AWS Management Console. You might not know where to locate relevant information about your resources, or you might be in one service console and want to access information about another service’s resources without disrupting your workflow. Amazon Q Developer answers your natural language questions about resources and provides deep links to those resources so you can quickly find them. You can ask Amazon Q to list a type of resource in your account, for details about a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. In that case, you can ask Amazon Q your question in natural language, and it will provide an"} +{"global_id": 742, "doc_id": "qdeveloper", "chunk_id": "3", "question_id": 3, "question": "What can you ask Amazon Q about your AWS resources?", "answer_span": "You can ask Amazon Q about your specific AWS account resources from anywhere in the AWS Management Console.", "chunk": "User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. Amazon Q Developer now helps you accelerate CloudWatch investigations across your AWS environment. Q looks for anomalies in your telemetry, surfaces related signals for you to explore, identifies potential root-cause hypothesis, and suggests next steps to help you remediate issues faster. By integrating Amazon Q into your investigative workflows, you can accelerate problem solving, enhance your understanding of your AWS environment, and make more informed decisions about your infrastructure and applications. For example questions to ask Amazon Q in the context of Amazon CloudWatch investigations, see Chatting about your telemetry and operations . For more information about CloudWatch investigations in general, see CloudWatch investigations in the Amazon CloudWatch User Guide. Taking inventory of your AWS resources You can ask Amazon Q about your specific AWS account resources from anywhere in the AWS Management Console. You might not know where to locate relevant information about your resources, or you might be in one service console and want to access information about another service’s resources without disrupting your workflow. Amazon Q Developer answers your natural language questions about resources and provides deep links to those resources so you can quickly find them. You can ask Amazon Q to list a type of resource in your account, for details about a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. 
In that case, you can ask Amazon Q your question in natural language, and it will provide an"} +{"global_id": 743, "doc_id": "qdeveloper", "chunk_id": "3", "question_id": 4, "question": "What type of questions can you ask Amazon Q?", "answer_span": "You can ask Amazon Q to list a type of resource in your account, for details about a specific resource, or to list resources based on a criteria such as region or state.", "chunk": "User Guide Investigating operational issues Amazon CloudWatch investigations enhance your ability to investigate and analyze resources, events, and activities across your AWS environment. By leveraging natural language processing, Amazon Q simplifies the process of understanding complex scenarios and relationships within your AWS account. Amazon Q Developer now helps you accelerate CloudWatch investigations across your AWS environment. Q looks for anomalies in your telemetry, surfaces related signals for you to explore, identifies potential root-cause hypothesis, and suggests next steps to help you remediate issues faster. By integrating Amazon Q into your investigative workflows, you can accelerate problem solving, enhance your understanding of your AWS environment, and make more informed decisions about your infrastructure and applications. For example questions to ask Amazon Q in the context of Amazon CloudWatch investigations, see Chatting about your telemetry and operations . For more information about CloudWatch investigations in general, see CloudWatch investigations in the Amazon CloudWatch User Guide. Taking inventory of your AWS resources You can ask Amazon Q about your specific AWS account resources from anywhere in the AWS Management Console. You might not know where to locate relevant information about your resources, or you might be in one service console and want to access information about another service’s resources without disrupting your workflow. Amazon Q Developer answers your natural language questions about resources and provides deep links to those resources so you can quickly find them. You can ask Amazon Q to list a type of resource in your account, for details about a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. In that case, you can ask Amazon Q your question in natural language, and it will provide an"} +{"global_id": 744, "doc_id": "qdeveloper", "chunk_id": "4", "question_id": 1, "question": "What can you ask Amazon Q in natural language?", "answer_span": "you can ask Amazon Q your question in natural language, and it will provide an answer based on your specific resources.", "chunk": "a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. In that case, you can ask Amazon Q your question in natural language, and it will provide an answer based on your specific resources. For more information, see Chatting about your resources with Amazon Q Developer . Investigating 5 Amazon Q Developer User Guide For information about specific limits for each type, and how they relate to pricing for specific subscription package, see Amazon Q Developer pricing. Use Amazon Q in the AWS Console Mobile Application Amazon Q is integrated with the AWS Console Mobile Application to answer questions about AWS. You configure access the same way that you get access to Amazon Q in the AWS Management Console. 
For more information, see Getting started with Amazon Q Developer . Diagnosing console errors In the AWS Management Console, Amazon Q Developer can diagnose common errors you receive while working with AWS services, such as insufficient permissions, incorrect configuration, and exceeding service limits. For more information, see Diagnosing common errors in the console with Amazon Q Developer . Compute Choosing Amazon Elastic Compute Cloud instances With so many Amazon EC2 instance types available, finding the right instance types for your workload can be time-consuming and complex. The Amazon Q instance type selector considers your use case, workload type, CPU manufacturer preference, and how you prioritize price and performance, as well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User"} +{"global_id": 745, "doc_id": "qdeveloper", "chunk_id": "4", "question_id": 2, "question": "How does Amazon Q help with diagnosing console errors?", "answer_span": "Amazon Q Developer can diagnose common errors you receive while working with AWS services, such as insufficient permissions, incorrect configuration, and exceeding service limits.", "chunk": "a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. In that case, you can ask Amazon Q your question in natural language, and it will provide an answer based on your specific resources. For more information, see Chatting about your resources with Amazon Q Developer . Investigating 5 Amazon Q Developer User Guide For information about specific limits for each type, and how they relate to pricing for specific subscription package, see Amazon Q Developer pricing. Use Amazon Q in the AWS Console Mobile Application Amazon Q is integrated with the AWS Console Mobile Application to answer questions about AWS. You configure access the same way that you get access to Amazon Q in the AWS Management Console. For more information, see Getting started with Amazon Q Developer . Diagnosing console errors In the AWS Management Console, Amazon Q Developer can diagnose common errors you receive while working with AWS services, such as insufficient permissions, incorrect configuration, and exceeding service limits. For more information, see Diagnosing common errors in the console with Amazon Q Developer . Compute Choosing Amazon Elastic Compute Cloud instances With so many Amazon EC2 instance types available, finding the right instance types for your workload can be time-consuming and complex. The Amazon Q instance type selector considers your use case, workload type, CPU manufacturer preference, and how you prioritize price and performance, as well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. 
For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User"} +{"global_id": 746, "doc_id": "qdeveloper", "chunk_id": "4", "question_id": 3, "question": "What does the Amazon Q instance type selector consider?", "answer_span": "The Amazon Q instance type selector considers your use case, workload type, CPU manufacturer preference, and how you prioritize price and performance, as well as additional parameters that you can specify.", "chunk": "a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. In that case, you can ask Amazon Q your question in natural language, and it will provide an answer based on your specific resources. For more information, see Chatting about your resources with Amazon Q Developer . Investigating 5 Amazon Q Developer User Guide For information about specific limits for each type, and how they relate to pricing for specific subscription package, see Amazon Q Developer pricing. Use Amazon Q in the AWS Console Mobile Application Amazon Q is integrated with the AWS Console Mobile Application to answer questions about AWS. You configure access the same way that you get access to Amazon Q in the AWS Management Console. For more information, see Getting started with Amazon Q Developer . Diagnosing console errors In the AWS Management Console, Amazon Q Developer can diagnose common errors you receive while working with AWS services, such as insufficient permissions, incorrect configuration, and exceeding service limits. For more information, see Diagnosing common errors in the console with Amazon Q Developer . Compute Choosing Amazon Elastic Compute Cloud instances With so many Amazon EC2 instance types available, finding the right instance types for your workload can be time-consuming and complex. The Amazon Q instance type selector considers your use case, workload type, CPU manufacturer preference, and how you prioritize price and performance, as well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User"} +{"global_id": 747, "doc_id": "qdeveloper", "chunk_id": "4", "question_id": 4, "question": "Where is Amazon Q integrated?", "answer_span": "Amazon Q is integrated with the AWS Console Mobile Application to answer questions about AWS.", "chunk": "a specific resource, or to list resources based on a criteria such as region or state. For example, you may want to know how many Amazon EC2 instances you currently have running. In that case, you can ask Amazon Q your question in natural language, and it will provide an answer based on your specific resources. For more information, see Chatting about your resources with Amazon Q Developer . Investigating 5 Amazon Q Developer User Guide For information about specific limits for each type, and how they relate to pricing for specific subscription package, see Amazon Q Developer pricing. Use Amazon Q in the AWS Console Mobile Application Amazon Q is integrated with the AWS Console Mobile Application to answer questions about AWS. You configure access the same way that you get access to Amazon Q in the AWS Management Console. For more information, see Getting started with Amazon Q Developer . 
Diagnosing console errors In the AWS Management Console, Amazon Q Developer can diagnose common errors you receive while working with AWS services, such as insufficient permissions, incorrect configuration, and exceeding service limits. For more information, see Diagnosing common errors in the console with Amazon Q Developer . Compute Choosing Amazon Elastic Compute Cloud instances With so many Amazon EC2 instance types available, finding the right instance types for your workload can be time-consuming and complex. The Amazon Q instance type selector considers your use case, workload type, CPU manufacturer preference, and how you prioritize price and performance, as well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User"} +{"global_id": 748, "doc_id": "qdeveloper", "chunk_id": "5", "question_id": 1, "question": "What does Amazon Q generative SQL use to analyze user intent?", "answer_span": "Amazon Q generative SQL uses generative AI to analyze user intent, query patterns, and schema metadata", "chunk": "well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User Guide. Use Amazon Q in the AWS Console Mobile Application 6 Amazon Q Developer User Guide Databases Writing database queries with natural language Amazon Q generative SQL uses generative AI to analyze user intent, query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the query authoring process for users and reducing the time required to derive actionable data insights. For more information, see Interacting with Amazon Q generative SQL in the Amazon Redshift Management Guide. Networking and content delivery Analyzing network troubleshooting You can use Amazon Q to help you diagnose network connectivity issues for applications that run in your Amazon VPCs. Amazon Q network troubleshooting can understand natural language Databases 7 Amazon Q Developer User Guide queries, and works with Reachability Analyzer to provide relevant responses. With Amazon Q, you can ask network reachability questions in a conversational format. For more information, see Amazon Q network troubleshooting for Reachability Analyzer in the Amazon VPC Reachability Analyzer Guide. Security, Identity, & Compliance Analyzing network security configurations (preview) You can easily get answers, in natural language, to questions about your network security configurations from AWS Shield network security director. Amazon Q helps you analyze your network security findings and provides recommended remediation steps in the console and chat applications. For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. 
Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with"} +{"global_id": 749, "doc_id": "qdeveloper", "chunk_id": "5", "question_id": 2, "question": "What can you use Amazon Q for in relation to network connectivity issues?", "answer_span": "You can use Amazon Q to help you diagnose network connectivity issues for applications that run in your Amazon VPCs.", "chunk": "well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User Guide. Use Amazon Q in the AWS Console Mobile Application 6 Amazon Q Developer User Guide Databases Writing database queries with natural language Amazon Q generative SQL uses generative AI to analyze user intent, query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the query authoring process for users and reducing the time required to derive actionable data insights. For more information, see Interacting with Amazon Q generative SQL in the Amazon Redshift Management Guide. Networking and content delivery Analyzing network troubleshooting You can use Amazon Q to help you diagnose network connectivity issues for applications that run in your Amazon VPCs. Amazon Q network troubleshooting can understand natural language Databases 7 Amazon Q Developer User Guide queries, and works with Reachability Analyzer to provide relevant responses. With Amazon Q, you can ask network reachability questions in a conversational format. For more information, see Amazon Q network troubleshooting for Reachability Analyzer in the Amazon VPC Reachability Analyzer Guide. Security, Identity, & Compliance Analyzing network security configurations (preview) You can easily get answers, in natural language, to questions about your network security configurations from AWS Shield network security director. Amazon Q helps you analyze your network security findings and provides recommended remediation steps in the console and chat applications. For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with"} +{"global_id": 750, "doc_id": "qdeveloper", "chunk_id": "5", "question_id": 3, "question": "What does Amazon Q help you analyze regarding network security?", "answer_span": "Amazon Q helps you analyze your network security findings and provides recommended remediation steps in the console and chat applications.", "chunk": "well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User Guide. Use Amazon Q in the AWS Console Mobile Application 6 Amazon Q Developer User Guide Databases Writing database queries with natural language Amazon Q generative SQL uses generative AI to analyze user intent, query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the query authoring process for users and reducing the time required to derive actionable data insights. 
For more information, see Interacting with Amazon Q generative SQL in the Amazon Redshift Management Guide. Networking and content delivery Analyzing network troubleshooting You can use Amazon Q to help you diagnose network connectivity issues for applications that run in your Amazon VPCs. Amazon Q network troubleshooting can understand natural language Databases 7 Amazon Q Developer User Guide queries, and works with Reachability Analyzer to provide relevant responses. With Amazon Q, you can ask network reachability questions in a conversational format. For more information, see Amazon Q network troubleshooting for Reachability Analyzer in the Amazon VPC Reachability Analyzer Guide. Security, Identity, & Compliance Analyzing network security configurations (preview) You can easily get answers, in natural language, to questions about your network security configurations from AWS Shield network security director. Amazon Q helps you analyze your network security findings and provides recommended remediation steps in the console and chat applications. For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with"} +{"global_id": 751, "doc_id": "qdeveloper", "chunk_id": "5", "question_id": 4, "question": "Where can you find more information about Amazon Q instance type finder?", "answer_span": "For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User Guide.", "chunk": "well as additional parameters that you can specify. It then uses this data to provide suggestions and guidance for Amazon EC2 instance types that are best suited to your new workloads. For more information, see Get recommendations from Amazon EC2 instance type finder in the Amazon Elastic Compute Cloud User Guide. Use Amazon Q in the AWS Console Mobile Application 6 Amazon Q Developer User Guide Databases Writing database queries with natural language Amazon Q generative SQL uses generative AI to analyze user intent, query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the query authoring process for users and reducing the time required to derive actionable data insights. For more information, see Interacting with Amazon Q generative SQL in the Amazon Redshift Management Guide. Networking and content delivery Analyzing network troubleshooting You can use Amazon Q to help you diagnose network connectivity issues for applications that run in your Amazon VPCs. Amazon Q network troubleshooting can understand natural language Databases 7 Amazon Q Developer User Guide queries, and works with Reachability Analyzer to provide relevant responses. With Amazon Q, you can ask network reachability questions in a conversational format. For more information, see Amazon Q network troubleshooting for Reachability Analyzer in the Amazon VPC Reachability Analyzer Guide. Security, Identity, & Compliance Analyzing network security configurations (preview) You can easily get answers, in natural language, to questions about your network security configurations from AWS Shield network security director. Amazon Q helps you analyze your network security findings and provides recommended remediation steps in the console and chat applications. 
For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with"} +{"global_id": 752, "doc_id": "qdeveloper", "chunk_id": "6", "question_id": 1, "question": "What does Amazon Q provide in the console and chat applications?", "answer_span": "your network security findings and provides recommended remediation steps in the console and chat applications.", "chunk": "your network security findings and provides recommended remediation steps in the console and chat applications. For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with software development. Amazon Q can explain coding concepts and code snippets, generate code and unit tests, and improve code, including debugging or refactoring. Developing code features After you explain, in natural language, the feature that you want to develop, Amazon Q can use the context of your current project to generate an implementation plan and the accompanying code. Amazon Q can help you build AWS projects or your own applications. For more information, see Developing features with Amazon Q Developer . Getting inline code suggestions Amazon Q provides you with code recommendations in real time. As you write code, Amazon Q automatically generates suggestions based on your existing code and comments. For more information, see Generating inline suggestions with Amazon Q Developer. Chatting about code in IDEs Within integrated development environments (IDEs), Amazon Q can answer questions related to the software development process, including conceptual questions about programming and how Security, Identity, & Compliance 8 Amazon Q Developer User Guide specific code works. You can also ask Amazon Q to update and improve code snippets from the chat panel. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic"} +{"global_id": 753, "doc_id": "qdeveloper", "chunk_id": "6", "question_id": 2, "question": "What can Amazon Q explain related to coding?", "answer_span": "Amazon Q can explain coding concepts and code snippets, generate code and unit tests, and improve code, including debugging or refactoring.", "chunk": "your network security findings and provides recommended remediation steps in the console and chat applications. For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with software development. Amazon Q can explain coding concepts and code snippets, generate code and unit tests, and improve code, including debugging or refactoring. 
Developing code features After you explain, in natural language, the feature that you want to develop, Amazon Q can use the context of your current project to generate an implementation plan and the accompanying code. Amazon Q can help you build AWS projects or your own applications. For more information, see Developing features with Amazon Q Developer . Getting inline code suggestions Amazon Q provides you with code recommendations in real time. As you write code, Amazon Q automatically generates suggestions based on your existing code and comments. For more information, see Generating inline suggestions with Amazon Q Developer. Chatting about code in IDEs Within integrated development environments (IDEs), Amazon Q can answer questions related to the software development process, including conceptual questions about programming and how Security, Identity, & Compliance 8 Amazon Q Developer User Guide specific code works. You can also ask Amazon Q to update and improve code snippets from the chat panel. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic"} +{"global_id": 754, "doc_id": "qdeveloper", "chunk_id": "6", "question_id": 3, "question": "What does Amazon Q do as you write code?", "answer_span": "Amazon Q automatically generates suggestions based on your existing code and comments.", "chunk": "your network security findings and provides recommended remediation steps in the console and chat applications. For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with software development. Amazon Q can explain coding concepts and code snippets, generate code and unit tests, and improve code, including debugging or refactoring. Developing code features After you explain, in natural language, the feature that you want to develop, Amazon Q can use the context of your current project to generate an implementation plan and the accompanying code. Amazon Q can help you build AWS projects or your own applications. For more information, see Developing features with Amazon Q Developer . Getting inline code suggestions Amazon Q provides you with code recommendations in real time. As you write code, Amazon Q automatically generates suggestions based on your existing code and comments. For more information, see Generating inline suggestions with Amazon Q Developer. Chatting about code in IDEs Within integrated development environments (IDEs), Amazon Q can answer questions related to the software development process, including conceptual questions about programming and how Security, Identity, & Compliance 8 Amazon Q Developer User Guide specific code works. You can also ask Amazon Q to update and improve code snippets from the chat panel. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. 
For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic"} +{"global_id": 755, "doc_id": "qdeveloper", "chunk_id": "6", "question_id": 4, "question": "In which environments can you chat with Amazon Q?", "answer_span": "Within integrated development environments (IDEs), Amazon Q can answer questions related to the software development process.", "chunk": "your network security findings and provides recommended remediation steps in the console and chat applications. For more information, see Analyze network security with Amazon Q Developer in the AWS Shield network security director Developer Guide. Developer tools Ask Amazon Q Developer questions about building at AWS and for assistance with software development. Amazon Q can explain coding concepts and code snippets, generate code and unit tests, and improve code, including debugging or refactoring. Developing code features After you explain, in natural language, the feature that you want to develop, Amazon Q can use the context of your current project to generate an implementation plan and the accompanying code. Amazon Q can help you build AWS projects or your own applications. For more information, see Developing features with Amazon Q Developer . Getting inline code suggestions Amazon Q provides you with code recommendations in real time. As you write code, Amazon Q automatically generates suggestions based on your existing code and comments. For more information, see Generating inline suggestions with Amazon Q Developer. Chatting about code in IDEs Within integrated development environments (IDEs), Amazon Q can answer questions related to the software development process, including conceptual questions about programming and how Security, Identity, & Compliance 8 Amazon Q Developer User Guide specific code works. You can also ask Amazon Q to update and improve code snippets from the chat panel. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic"} +{"global_id": 756, "doc_id": "qdeveloper", "chunk_id": "7", "question_id": 1, "question": "What languages are available with Amazon Q Developer?", "answer_span": "Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available.", "chunk": "Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic code completion functionality in other interfaces across AWS, see Generating inline suggestions in AWS coding environments . Reviewing your code for security vulnerabilities and quality issues Within IDEs, Amazon Q reviews your code for security vulnerabilities and code quality issues. Amazon Q can review as you code or review entire projects to monitor the security and quality of your applications throughout development. 
For more information, see Reviewing code with Amazon Q Developer . Transforming code Amazon Q can perform automated language and operating system (OS)-level upgrades for your applications. For more information, see Transforming code in the IDE with Amazon Q Developer . Generating unit tests Amazon Q Developer provides an AI-powered unit test generation feature to help development teams improve code coverage throughout their software development lifecycle. The Amazon Q Developer agent for unit test generation is available in the following environments: • Amazon Q Developer IDE extension. For more information, see Generating unit tests with Amazon Q . • GitLab, as part of GitLab Duo. For more information, see the section called “GitLab quick actions” . Reviewing your code for security vulnerabilities and quality issues 9 Amazon Q Developer User Guide Note The unit test generation capability is available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. You can also"} +{"global_id": 757, "doc_id": "qdeveloper", "chunk_id": "7", "question_id": 2, "question": "What feature does Amazon Q Developer provide to help improve code coverage?", "answer_span": "Amazon Q Developer provides an AI-powered unit test generation feature to help development teams improve code coverage throughout their software development lifecycle.", "chunk": "Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic code completion functionality in other interfaces across AWS, see Generating inline suggestions in AWS coding environments . Reviewing your code for security vulnerabilities and quality issues Within IDEs, Amazon Q reviews your code for security vulnerabilities and code quality issues. Amazon Q can review as you code or review entire projects to monitor the security and quality of your applications throughout development. For more information, see Reviewing code with Amazon Q Developer . Transforming code Amazon Q can perform automated language and operating system (OS)-level upgrades for your applications. For more information, see Transforming code in the IDE with Amazon Q Developer . Generating unit tests Amazon Q Developer provides an AI-powered unit test generation feature to help development teams improve code coverage throughout their software development lifecycle. The Amazon Q Developer agent for unit test generation is available in the following environments: • Amazon Q Developer IDE extension. For more information, see Generating unit tests with Amazon Q . • GitLab, as part of GitLab Duo. For more information, see the section called “GitLab quick actions” . Reviewing your code for security vulnerabilities and quality issues 9 Amazon Q Developer User Guide Note The unit test generation capability is available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. 
You can also"} +{"global_id": 758, "doc_id": "qdeveloper", "chunk_id": "7", "question_id": 3, "question": "What can Amazon Q do while you code?", "answer_span": "Amazon Q can review as you code or review entire projects to monitor the security and quality of your applications throughout development.", "chunk": "Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic code completion functionality in other interfaces across AWS, see Generating inline suggestions in AWS coding environments . Reviewing your code for security vulnerabilities and quality issues Within IDEs, Amazon Q reviews your code for security vulnerabilities and code quality issues. Amazon Q can review as you code or review entire projects to monitor the security and quality of your applications throughout development. For more information, see Reviewing code with Amazon Q Developer . Transforming code Amazon Q can perform automated language and operating system (OS)-level upgrades for your applications. For more information, see Transforming code in the IDE with Amazon Q Developer . Generating unit tests Amazon Q Developer provides an AI-powered unit test generation feature to help development teams improve code coverage throughout their software development lifecycle. The Amazon Q Developer agent for unit test generation is available in the following environments: • Amazon Q Developer IDE extension. For more information, see Generating unit tests with Amazon Q . • GitLab, as part of GitLab Duo. For more information, see the section called “GitLab quick actions” . Reviewing your code for security vulnerabilities and quality issues 9 Amazon Q Developer User Guide Note The unit test generation capability is available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. You can also"} +{"global_id": 759, "doc_id": "qdeveloper", "chunk_id": "7", "question_id": 4, "question": "In which environments is the Amazon Q Developer agent for unit test generation available?", "answer_span": "The Amazon Q Developer agent for unit test generation is available in the following environments: • Amazon Q Developer IDE extension.", "chunk": "Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Chatting with Amazon Q Developer about code . To write code and get development assistance in the most full-featured environment with Amazon Q Developer, see Using Amazon Q Developer in the IDE. To enable basic code completion functionality in other interfaces across AWS, see Generating inline suggestions in AWS coding environments . Reviewing your code for security vulnerabilities and quality issues Within IDEs, Amazon Q reviews your code for security vulnerabilities and code quality issues. Amazon Q can review as you code or review entire projects to monitor the security and quality of your applications throughout development. For more information, see Reviewing code with Amazon Q Developer . 
Transforming code Amazon Q can perform automated language and operating system (OS)-level upgrades for your applications. For more information, see Transforming code in the IDE with Amazon Q Developer . Generating unit tests Amazon Q Developer provides an AI-powered unit test generation feature to help development teams improve code coverage throughout their software development lifecycle. The Amazon Q Developer agent for unit test generation is available in the following environments: • Amazon Q Developer IDE extension. For more information, see Generating unit tests with Amazon Q . • GitLab, as part of GitLab Duo. For more information, see the section called “GitLab quick actions” . Reviewing your code for security vulnerabilities and quality issues 9 Amazon Q Developer User Guide Note The unit test generation capability is available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. You can also"} +{"global_id": 760, "doc_id": "qdeveloper", "chunk_id": "8", "question_id": 1, "question": "What features does Amazon Q Developer in CodeCatalyst include?", "answer_span": "Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster.", "chunk": "available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. You can also ask Amazon Q to write a description or to summarize content. For more information, see Managing generative AI features in Amazon CodeCatalyst in the Amazon CodeCatalyst administrator guide. Chatting about code in Amazon SageMaker AI Studio Amazon SageMaker AI Studio is a web-based experience for running ML workflows. You can chat with Amazon Q Developer inside Studio to get guidance on SageMaker AI features, troubleshoot JupyterLab errors, and get sample code. For more information, see Use Amazon Q to Expedite Your Machine Learning Workflows in the SageMaker AI Developer Guide. Developing software in Amazon CodeCatalyst 10 Amazon Q Developer User Guide Interacting with the command line and AWS CloudShell Command Line Interface (CLI) After installing Amazon Q for the command line, you can use it to complete CLI commands as it populates contextually relevant subcommands, options and arguments. It provides AI-generated completions as you type in the command line. Additionally, you can use Amazon Q to write natural language instructions that are instantly translated to an executable shell code snippet. You can also ask Amazon Q complex questions, and it provides feedback and instructions based on the conversation, as well as context and information outside of the conversation. You can then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. 
For more information, see Using"} +{"global_id": 761, "doc_id": "qdeveloper", "chunk_id": "8", "question_id": 2, "question": "What can you ask Amazon Q to do?", "answer_span": "You can also ask Amazon Q to write a description or to summarize content.", "chunk": "available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. You can also ask Amazon Q to write a description or to summarize content. For more information, see Managing generative AI features in Amazon CodeCatalyst in the Amazon CodeCatalyst administrator guide. Chatting about code in Amazon SageMaker AI Studio Amazon SageMaker AI Studio is a web-based experience for running ML workflows. You can chat with Amazon Q Developer inside Studio to get guidance on SageMaker AI features, troubleshoot JupyterLab errors, and get sample code. For more information, see Use Amazon Q to Expedite Your Machine Learning Workflows in the SageMaker AI Developer Guide. Developing software in Amazon CodeCatalyst 10 Amazon Q Developer User Guide Interacting with the command line and AWS CloudShell Command Line Interface (CLI) After installing Amazon Q for the command line, you can use it to complete CLI commands as it populates contextually relevant subcommands, options and arguments. It provides AI-generated completions as you type in the command line. Additionally, you can use Amazon Q to write natural language instructions that are instantly translated to an executable shell code snippet. You can also ask Amazon Q complex questions, and it provides feedback and instructions based on the conversation, as well as context and information outside of the conversation. You can then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Using"} +{"global_id": 762, "doc_id": "qdeveloper", "chunk_id": "8", "question_id": 3, "question": "What is Amazon SageMaker AI Studio?", "answer_span": "Amazon SageMaker AI Studio is a web-based experience for running ML workflows.", "chunk": "available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. You can also ask Amazon Q to write a description or to summarize content. For more information, see Managing generative AI features in Amazon CodeCatalyst in the Amazon CodeCatalyst administrator guide. Chatting about code in Amazon SageMaker AI Studio Amazon SageMaker AI Studio is a web-based experience for running ML workflows. You can chat with Amazon Q Developer inside Studio to get guidance on SageMaker AI features, troubleshoot JupyterLab errors, and get sample code. For more information, see Use Amazon Q to Expedite Your Machine Learning Workflows in the SageMaker AI Developer Guide. 
Developing software in Amazon CodeCatalyst 10 Amazon Q Developer User Guide Interacting with the command line and AWS CloudShell Command Line Interface (CLI) After installing Amazon Q for the command line, you can use it to complete CLI commands as it populates contextually relevant subcommands, options and arguments. It provides AI-generated completions as you type in the command line. Additionally, you can use Amazon Q to write natural language instructions that are instantly translated to an executable shell code snippet. You can also ask Amazon Q complex questions, and it provides feedback and instructions based on the conversation, as well as context and information outside of the conversation. You can then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Using"} +{"global_id": 763, "doc_id": "qdeveloper", "chunk_id": "8", "question_id": 4, "question": "What can you do after installing Amazon Q for the command line?", "answer_span": "After installing Amazon Q for the command line, you can use it to complete CLI commands as it populates contextually relevant subcommands, options and arguments.", "chunk": "available in all Amazon Q Developer supported regions. Developing software in Amazon CodeCatalyst Amazon Q Developer in CodeCatalyst includes generative AI features that can help users in projects in your space develop software faster. You can assign issues to Amazon Q or recommend tasks for Amazon Q. You can also ask Amazon Q to write a description or to summarize content. For more information, see Managing generative AI features in Amazon CodeCatalyst in the Amazon CodeCatalyst administrator guide. Chatting about code in Amazon SageMaker AI Studio Amazon SageMaker AI Studio is a web-based experience for running ML workflows. You can chat with Amazon Q Developer inside Studio to get guidance on SageMaker AI features, troubleshoot JupyterLab errors, and get sample code. For more information, see Use Amazon Q to Expedite Your Machine Learning Workflows in the SageMaker AI Developer Guide. Developing software in Amazon CodeCatalyst 10 Amazon Q Developer User Guide Interacting with the command line and AWS CloudShell Command Line Interface (CLI) After installing Amazon Q for the command line, you can use it to complete CLI commands as it populates contextually relevant subcommands, options and arguments. It provides AI-generated completions as you type in the command line. Additionally, you can use Amazon Q to write natural language instructions that are instantly translated to an executable shell code snippet. You can also ask Amazon Q complex questions, and it provides feedback and instructions based on the conversation, as well as context and information outside of the conversation. You can then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. 
For more information, see Using"} +{"global_id": 764, "doc_id": "qdeveloper", "chunk_id": "9", "question_id": 1, "question": "What permission do you need to provide to Amazon Q?", "answer_span": "then provide permission to Amazon Q so it performs actions on your behalf.", "chunk": "then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Using Amazon Q Developer on the command line . AWS CloudShell You can also use Amazon Q CLI in AWS CloudShell to interact in natural language conversations, ask questions, and receive responses from Amazon Q in your terminal. You can get the related shell command that reduces the need to search for or remember syntax. With Amazon Q, you can receive command suggestions as you type in the terminal. For more information, see Using Amazon Q AWS CLI in AWS CloudShell. Application integration Writing scripts to automate AWS services You may know exactly what to do with your AWS resources, and you may find yourself taking the same actions repeatedly. In that case, you can ask Amazon Q to write code that will automate the repetitive tasks. For example, you may be working on a project that uses Amazon VPCs, Amazon EC2 instances, and Amazon RDS databases. In the course of your testing, you find that every time you create a Amazon VPC, spin up a server, and deploy a database, the configuration is the same. You always choose the same instance and database type, with the same options selected, using the same security groups, in subnets with the same NACL configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing"} +{"global_id": 765, "doc_id": "qdeveloper", "chunk_id": "9", "question_id": 2, "question": "In which languages can you chat with Amazon Q?", "answer_span": "you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available.", "chunk": "then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Using Amazon Q Developer on the command line . AWS CloudShell You can also use Amazon Q CLI in AWS CloudShell to interact in natural language conversations, ask questions, and receive responses from Amazon Q in your terminal. You can get the related shell command that reduces the need to search for or remember syntax. With Amazon Q, you can receive command suggestions as you type in the terminal. For more information, see Using Amazon Q AWS CLI in AWS CloudShell. Application integration Writing scripts to automate AWS services You may know exactly what to do with your AWS resources, and you may find yourself taking the same actions repeatedly. In that case, you can ask Amazon Q to write code that will automate the repetitive tasks. 
For example, you may be working on a project that uses Amazon VPCs, Amazon EC2 instances, and Amazon RDS databases. In the course of your testing, you find that every time you create a Amazon VPC, spin up a server, and deploy a database, the configuration is the same. You always choose the same instance and database type, with the same options selected, using the same security groups, in subnets with the same NACL configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing"} +{"global_id": 765, "doc_id": "qdeveloper", "chunk_id": "9", "question_id": 2, "question": "In which languages can you chat with Amazon Q?", "answer_span": "you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available.", "chunk": "then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Using Amazon Q Developer on the command line . AWS CloudShell You can also use Amazon Q CLI in AWS CloudShell to interact in natural language conversations, ask questions, and receive responses from Amazon Q in your terminal. You can get the related shell command that reduces the need to search for or remember syntax. With Amazon Q, you can receive command suggestions as you type in the terminal. For more information, see Using Amazon Q AWS CLI in AWS CloudShell. Application integration Writing scripts to automate AWS services You may know exactly what to do with your AWS resources, and you may find yourself taking the same actions repeatedly. In that case, you can ask Amazon Q to write code that will automate the repetitive tasks. For example, you may be working on a project that uses Amazon VPCs, Amazon EC2 instances, and Amazon RDS databases. In the course of your testing, you find that every time you create a Amazon VPC, spin up a server, and deploy a database, the configuration is the same. You always choose the same instance and database type, with the same options selected, using the same security groups, in subnets with the same NACL configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing"} +{"global_id": 766, "doc_id": "qdeveloper", "chunk_id": "9", "question_id": 3, "question": "What can you do with Amazon Q CLI in AWS CloudShell?", "answer_span": "You can also use Amazon Q CLI in AWS CloudShell to interact in natural language conversations, ask questions, and receive responses from Amazon Q in your terminal.", "chunk": "then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Using Amazon Q Developer on the command line . AWS CloudShell You can also use Amazon Q CLI in AWS CloudShell to interact in natural language conversations, ask questions, and receive responses from Amazon Q in your terminal. You can get the related shell command that reduces the need to search for or remember syntax. With Amazon Q, you can receive command suggestions as you type in the terminal. For more information, see Using Amazon Q AWS CLI in AWS CloudShell. Application integration Writing scripts to automate AWS services You may know exactly what to do with your AWS resources, and you may find yourself taking the same actions repeatedly. In that case, you can ask Amazon Q to write code that will automate the repetitive tasks. For example, you may be working on a project that uses Amazon VPCs, Amazon EC2 instances, and Amazon RDS databases. In the course of your testing, you find that every time you create a Amazon VPC, spin up a server, and deploy a database, the configuration is the same. You always choose the same instance and database type, with the same options selected, using the same security groups, in subnets with the same NACL configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing"} +{"global_id": 767, "doc_id": "qdeveloper", "chunk_id": "9", "question_id": 4, "question": "What feature does Amazon Q offer to automate a workflow?", "answer_span": "You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing", "chunk": "then provide permission to Amazon Q so it performs actions on your behalf. With multi-language support, you can chat with Amazon Q in any of the supported natural languages, including English, Mandarin, French, German, Italian, Japanese, Spanish, Korean, Hindi and Portuguese, with more languages available. For more information, see Using Amazon Q Developer on the command line . 
AWS CloudShell You can also use Amazon Q CLI in AWS CloudShell to interact in natural language conversations, ask questions, and receive responses from Amazon Q in your terminal. You can get the related shell command that reduces the need to search for or remember syntax. With Amazon Q, you can receive command suggestions as you type in the terminal. For more information, see Using Amazon Q AWS CLI in AWS CloudShell. Application integration Writing scripts to automate AWS services You may know exactly what to do with your AWS resources, and you may find yourself taking the same actions repeatedly. In that case, you can ask Amazon Q to write code that will automate the repetitive tasks. For example, you may be working on a project that uses Amazon VPCs, Amazon EC2 instances, and Amazon RDS databases. In the course of your testing, you find that every time you create a Amazon VPC, spin up a server, and deploy a database, the configuration is the same. You always choose the same instance and database type, with the same options selected, using the same security groups, in subnets with the same NACL configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing"} +{"global_id": 768, "doc_id": "qdeveloper", "chunk_id": "10", "question_id": 1, "question": "What feature does Amazon Q provide to automate workflows?", "answer_span": "You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing it manually every time.", "chunk": "configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing it manually every time. First, you activate Console-to-Code in the Amazon EC2 console. Then, Amazon Q records your actions as you go through the process of configuring and launching your instance. Finally, Amazon Q provides you with code, in a language of your choice, that automates the process you just performed. For more information, see Automating AWS services with Amazon Q Developer Console-to-Code . Writing ETL scripts and integrating data AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. Amazon Q data integration in AWS Glue includes the following capabilities: • Chat – Amazon Q data integration in AWS Glue can answer natural language questions in English about AWS Glue and data integration domains like AWS Glue source and destination connectors, AWS Glue ETL jobs, Data Catalog, crawlers and AWS Lake Formation, and other feature documentation, and best practices. Amazon Q data integration in AWS Glue responds with step-by-step instructions, and includes references to its information sources. • Data integration code generation – Amazon Q data integration in AWS Glue can answer questions about AWS Glue ETL scripts, and generate new code given a natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. 
For more information, see Amazon Q data integration in AWS Glue in"} +{"global_id": 769, "doc_id": "qdeveloper", "chunk_id": "10", "question_id": 2, "question": "What service does AWS Glue provide?", "answer_span": "AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources.", "chunk": "configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing it manually every time. First, you activate Console-to-Code in the Amazon EC2 console. Then, Amazon Q records your actions as you go through the process of configuring and launching your instance. Finally, Amazon Q provides you with code, in a language of your choice, that automates the process you just performed. For more information, see Automating AWS services with Amazon Q Developer Console-to-Code . Writing ETL scripts and integrating data AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. Amazon Q data integration in AWS Glue includes the following capabilities: • Chat – Amazon Q data integration in AWS Glue can answer natural language questions in English about AWS Glue and data integration domains like AWS Glue source and destination connectors, AWS Glue ETL jobs, Data Catalog, crawlers and AWS Lake Formation, and other feature documentation, and best practices. Amazon Q data integration in AWS Glue responds with step-by-step instructions, and includes references to its information sources. • Data integration code generation – Amazon Q data integration in AWS Glue can answer questions about AWS Glue ETL scripts, and generate new code given a natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. For more information, see Amazon Q data integration in AWS Glue in"} +{"global_id": 770, "doc_id": "qdeveloper", "chunk_id": "10", "question_id": 3, "question": "What can Amazon Q data integration in AWS Glue do with natural language questions?", "answer_span": "Amazon Q data integration in AWS Glue can answer natural language questions in English about AWS Glue and data integration domains like AWS Glue source and destination connectors, AWS Glue ETL jobs, Data Catalog, crawlers and AWS Lake Formation, and other feature documentation, and best practices.", "chunk": "configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing it manually every time. First, you activate Console-to-Code in the Amazon EC2 console. Then, Amazon Q records your actions as you go through the process of configuring and launching your instance. Finally, Amazon Q provides you with code, in a language of your choice, that automates the process you just performed. For more information, see Automating AWS services with Amazon Q Developer Console-to-Code . 
Writing ETL scripts and integrating data AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. Amazon Q data integration in AWS Glue includes the following capabilities: • Chat – Amazon Q data integration in AWS Glue can answer natural language questions in English about AWS Glue and data integration domains like AWS Glue source and destination connectors, AWS Glue ETL jobs, Data Catalog, crawlers and AWS Lake Formation, and other feature documentation, and best practices. Amazon Q data integration in AWS Glue responds with step-by-step instructions, and includes references to its information sources. • Data integration code generation – Amazon Q data integration in AWS Glue can answer questions about AWS Glue ETL scripts, and generate new code given a natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. For more information, see Amazon Q data integration in AWS Glue in"} +{"global_id": 771, "doc_id": "qdeveloper", "chunk_id": "10", "question_id": 4, "question": "What does Amazon Q data integration in AWS Glue provide to help troubleshoot errors?", "answer_span": "Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues.", "chunk": "configuration. You don’t want to have to go through the same manual process every time you want to re-create your test conditions. Interacting with the command line and AWS CloudShell 11 Amazon Q Developer User Guide You can use Amazon Q’s Console-to-Code feature to automate a workflow instead of performing it manually every time. First, you activate Console-to-Code in the Amazon EC2 console. Then, Amazon Q records your actions as you go through the process of configuring and launching your instance. Finally, Amazon Q provides you with code, in a language of your choice, that automates the process you just performed. For more information, see Automating AWS services with Amazon Q Developer Console-to-Code . Writing ETL scripts and integrating data AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. Amazon Q data integration in AWS Glue includes the following capabilities: • Chat – Amazon Q data integration in AWS Glue can answer natural language questions in English about AWS Glue and data integration domains like AWS Glue source and destination connectors, AWS Glue ETL jobs, Data Catalog, crawlers and AWS Lake Formation, and other feature documentation, and best practices. Amazon Q data integration in AWS Glue responds with step-by-step instructions, and includes references to its information sources. • Data integration code generation – Amazon Q data integration in AWS Glue can answer questions about AWS Glue ETL scripts, and generate new code given a natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. 
For more information, see Amazon Q data integration in AWS Glue in"} +{"global_id": 772, "doc_id": "qdeveloper", "chunk_id": "11", "question_id": 1, "question": "What is Amazon Q data integration in AWS Glue used for?", "answer_span": "Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues.", "chunk": "natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. For more information, see Amazon Q data integration in AWS Glue in the AWS Glue User Guide. Third-party integrations Using GitLab Duo with Amazon Q You can GitLab Duo with Amazon Q for your software development operations and source code management workflows. After setting up Amazon Q in GitLab Duo, you can invoke quick actions to automate tasks, including implement code for your ideas, transform your codebase, review merge requests for quality and vulnerabilities, and suggest unit tests. Writing ETL scripts and integrating data 12 Amazon Q Developer User Guide For more information, see GitLab Duo with Amazon Q . Using Amazon Q Developer features in GitHub You can leverage Amazon Q Developer capabilities for your software development workflows. With specialized development agents, you can implement new ideas, review code for quality issues, address vulnerabilities with unit tests, and modernize legacy Java applications. For more information, see Amazon Q Developer for GitHub (Preview) . Cloud Financial Management Understanding your costs You can ask Amazon Q about your AWS bill and account costs in the AWS Management Console. Amazon Q can retrieve your cost data, explain costs, and analyze cost trends. For more information, see Chatting about your costs . Customer support Getting customer support directly from Amazon Q Amazon Q can answer your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . Creating a support ticket Amazon Q can help you create a support case"} +{"global_id": 773, "doc_id": "qdeveloper", "chunk_id": "11", "question_id": 2, "question": "What can you do with GitLab Duo and Amazon Q?", "answer_span": "You can GitLab Duo with Amazon Q for your software development operations and source code management workflows.", "chunk": "natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. For more information, see Amazon Q data integration in AWS Glue in the AWS Glue User Guide. Third-party integrations Using GitLab Duo with Amazon Q You can GitLab Duo with Amazon Q for your software development operations and source code management workflows. After setting up Amazon Q in GitLab Duo, you can invoke quick actions to automate tasks, including implement code for your ideas, transform your codebase, review merge requests for quality and vulnerabilities, and suggest unit tests. Writing ETL scripts and integrating data 12 Amazon Q Developer User Guide For more information, see GitLab Duo with Amazon Q . 
Using Amazon Q Developer features in GitHub You can leverage Amazon Q Developer capabilities for your software development workflows. With specialized development agents, you can implement new ideas, review code for quality issues, address vulnerabilities with unit tests, and modernize legacy Java applications. For more information, see Amazon Q Developer for GitHub (Preview) . Cloud Financial Management Understanding your costs You can ask Amazon Q about your AWS bill and account costs in the AWS Management Console. Amazon Q can retrieve your cost data, explain costs, and analyze cost trends. For more information, see Chatting about your costs . Customer support Getting customer support directly from Amazon Q Amazon Q can answer your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . Creating a support ticket Amazon Q can help you create a support case"} +{"global_id": 774, "doc_id": "qdeveloper", "chunk_id": "11", "question_id": 3, "question": "How can Amazon Q assist with understanding costs?", "answer_span": "You can ask Amazon Q about your AWS bill and account costs in the AWS Management Console.", "chunk": "natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. For more information, see Amazon Q data integration in AWS Glue in the AWS Glue User Guide. Third-party integrations Using GitLab Duo with Amazon Q You can GitLab Duo with Amazon Q for your software development operations and source code management workflows. After setting up Amazon Q in GitLab Duo, you can invoke quick actions to automate tasks, including implement code for your ideas, transform your codebase, review merge requests for quality and vulnerabilities, and suggest unit tests. Writing ETL scripts and integrating data 12 Amazon Q Developer User Guide For more information, see GitLab Duo with Amazon Q . Using Amazon Q Developer features in GitHub You can leverage Amazon Q Developer capabilities for your software development workflows. With specialized development agents, you can implement new ideas, review code for quality issues, address vulnerabilities with unit tests, and modernize legacy Java applications. For more information, see Amazon Q Developer for GitHub (Preview) . Cloud Financial Management Understanding your costs You can ask Amazon Q about your AWS bill and account costs in the AWS Management Console. Amazon Q can retrieve your cost data, explain costs, and analyze cost trends. For more information, see Chatting about your costs . Customer support Getting customer support directly from Amazon Q Amazon Q can answer your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . 
Creating a support ticket Amazon Q can help you create a support case"} +{"global_id": 775, "doc_id": "qdeveloper", "chunk_id": "11", "question_id": 4, "question": "What types of questions can Amazon Q answer regarding customer support?", "answer_span": "Amazon Q can answer your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources.", "chunk": "natural language question in English. • Troubleshoot – Amazon Q data integration in AWS Glue is purpose built to help you understand errors in AWS Glue jobs and provides step-by-step instructions, to root cause and resolve your issues. For more information, see Amazon Q data integration in AWS Glue in the AWS Glue User Guide. Third-party integrations Using GitLab Duo with Amazon Q You can GitLab Duo with Amazon Q for your software development operations and source code management workflows. After setting up Amazon Q in GitLab Duo, you can invoke quick actions to automate tasks, including implement code for your ideas, transform your codebase, review merge requests for quality and vulnerabilities, and suggest unit tests. Writing ETL scripts and integrating data 12 Amazon Q Developer User Guide For more information, see GitLab Duo with Amazon Q . Using Amazon Q Developer features in GitHub You can leverage Amazon Q Developer capabilities for your software development workflows. With specialized development agents, you can implement new ideas, review code for quality issues, address vulnerabilities with unit tests, and modernize legacy Java applications. For more information, see Amazon Q Developer for GitHub (Preview) . Cloud Financial Management Understanding your costs You can ask Amazon Q about your AWS bill and account costs in the AWS Management Console. Amazon Q can retrieve your cost data, explain costs, and analyze cost trends. For more information, see Chatting about your costs . Customer support Getting customer support directly from Amazon Q Amazon Q can answer your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . Creating a support ticket Amazon Q can help you create a support case"} +{"global_id": 776, "doc_id": "qdeveloper", "chunk_id": "12", "question_id": 1, "question": "What can Amazon Q help you create?", "answer_span": "Amazon Q can help you create a support case", "chunk": "your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . Creating a support ticket Amazon Q can help you create a support case and then connect you to a human support agent at AWS. For more information, see Using Amazon Q Developer to chat with Support . Amazon Q in chat applications You can activate Amazon Q in your Slack and Microsoft Teams applications to ask questions about building at AWS. 
To add Amazon Q to your chat applications, see Chatting with Amazon Q Using Amazon Q Developer features in GitHub 13"} +{"global_id": 777, "doc_id": "qdeveloper", "chunk_id": "12", "question_id": 2, "question": "In which applications can you activate Amazon Q?", "answer_span": "You can activate Amazon Q in your Slack and Microsoft Teams applications", "chunk": "your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . Creating a support ticket Amazon Q can help you create a support case and then connect you to a human support agent at AWS. For more information, see Using Amazon Q Developer to chat with Support . Amazon Q in chat applications You can activate Amazon Q in your Slack and Microsoft Teams applications to ask questions about building at AWS. To add Amazon Q to your chat applications, see Chatting with Amazon Q Using Amazon Q Developer features in GitHub 13"} +{"global_id": 778, "doc_id": "qdeveloper", "chunk_id": "12", "question_id": 3, "question": "What types of questions can you ask Amazon Q?", "answer_span": "your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources", "chunk": "your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . Creating a support ticket Amazon Q can help you create a support case and then connect you to a human support agent at AWS. For more information, see Using Amazon Q Developer to chat with Support . Amazon Q in chat applications You can activate Amazon Q in your Slack and Microsoft Teams applications to ask questions about building at AWS. To add Amazon Q to your chat applications, see Chatting with Amazon Q Using Amazon Q Developer features in GitHub 13"} +{"global_id": 779, "doc_id": "qdeveloper", "chunk_id": "12", "question_id": 4, "question": "Where can you find more information about using Amazon Q?", "answer_span": "For more information, see Using Amazon Q Developer to chat with Support", "chunk": "your questions about account activation, cost spikes, bill adjustment, fraud events, health events, and issues with your AWS resources. For more information, see Chatting about your costs , and Asking Amazon Q to troubleshoot your resources . Creating a support ticket Amazon Q can help you create a support case and then connect you to a human support agent at AWS. For more information, see Using Amazon Q Developer to chat with Support . Amazon Q in chat applications You can activate Amazon Q in your Slack and Microsoft Teams applications to ask questions about building at AWS. To add Amazon Q to your chat applications, see Chatting with Amazon Q Using Amazon Q Developer features in GitHub 13"} +{"global_id": 780, "doc_id": "s3", "chunk_id": "0", "question_id": 1, "question": "What is Amazon S3?", "answer_span": "Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industryleading scalability, data availability, security, and performance.", "chunk": "Amazon Simple Storage Service User Guide What is Amazon S3? Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industryleading scalability, data availability, security, and performance. 
Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Topics • Features of Amazon S3 • How Amazon S3 works • Amazon S3 data consistency model • Related services • Accessing Amazon S3 • Paying for Amazon S3 • PCI DSS compliance Features of Amazon S3 Storage classes Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store mission-critical production data in S3 Standard or S3 Express One Zone for frequent access, save costs by storing infrequently accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. Features of Amazon S3 API Version 2006-03-01 1 Amazon Simple Storage Service User Guide Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs"} +{"global_id": 781, "doc_id": "s3", "chunk_id": "0", "question_id": 2, "question": "What can customers use Amazon S3 for?", "answer_span": "Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.", "chunk": "Amazon Simple Storage Service User Guide What is Amazon S3? Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industryleading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Topics • Features of Amazon S3 • How Amazon S3 works • Amazon S3 data consistency model • Related services • Accessing Amazon S3 • Paying for Amazon S3 • PCI DSS compliance Features of Amazon S3 Storage classes Amazon S3 offers a range of storage classes designed for different use cases. 
For example, you can store mission-critical production data in S3 Standard or S3 Express One Zone for frequent access, save costs by storing infrequently accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. Features of Amazon S3 API Version 2006-03-01 1 Amazon Simple Storage Service User Guide Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs"} +{"global_id": 782, "doc_id": "s3", "chunk_id": "0", "question_id": 3, "question": "What is S3 Express One Zone?", "answer_span": "Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications.", "chunk": "Amazon Simple Storage Service User Guide What is Amazon S3? Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industryleading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Topics • Features of Amazon S3 • How Amazon S3 works • Amazon S3 data consistency model • Related services • Accessing Amazon S3 • Paying for Amazon S3 • PCI DSS compliance Features of Amazon S3 Storage classes Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store mission-critical production data in S3 Standard or S3 Express One Zone for frequent access, save costs by storing infrequently accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. Features of Amazon S3 API Version 2006-03-01 1 Amazon Simple Storage Service User Guide Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs"} +{"global_id": 783, "doc_id": "s3", "chunk_id": "0", "question_id": 4, "question": "What is the lowest latency cloud object storage class available today?", "answer_span": "S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs.", "chunk": "Amazon Simple Storage Service User Guide What is Amazon S3? 
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industryleading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Topics • Features of Amazon S3 • How Amazon S3 works • Amazon S3 data consistency model • Related services • Accessing Amazon S3 • Paying for Amazon S3 • PCI DSS compliance Features of Amazon S3 Storage classes Amazon S3 offers a range of storage classes designed for different use cases. For example, you can store mission-critical production data in S3 Standard or S3 Express One Zone for frequent access, save costs by storing infrequently accessed data in S3 Standard-IA or S3 One Zone-IA, and archive data at the lowest costs in S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive. Features of Amazon S3 API Version 2006-03-01 1 Amazon Simple Storage Service User Guide Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs"} +{"global_id": 784, "doc_id": "s3", "chunk_id": "1", "question_id": 1, "question": "What is S3 Express One Zone designed for?", "answer_span": "S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latency-sensitive applications.", "chunk": "Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs 50 percent lower than S3 Standard. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Additionally, to further increase access speed and support hundreds of thousands of requests per second, data is stored in a new bucket type: an Amazon S3 directory bucket. For more information, see S3 Express One Zone and Working with directory buckets. You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes storage costs by automatically moving your data between four access tiers when your access patterns change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data. 
For more information, see Understanding and managing Amazon S3 storage classes. Storage management Amazon S3 has storage management features that you can use to manage costs, meet regulatory requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements. • S3 Lifecycle – Configure a lifecycle configuration to manage your objects and store them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use"} +{"global_id": 785, "doc_id": "s3", "chunk_id": "1", "question_id": 2, "question": "How much faster is S3 Express One Zone compared to S3 Standard?", "answer_span": "S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs 50 percent lower than S3 Standard.", "chunk": "Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs 50 percent lower than S3 Standard. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Additionally, to further increase access speed and support hundreds of thousands of requests per second, data is stored in a new bucket type: an Amazon S3 directory bucket. For more information, see S3 Express One Zone and Working with directory buckets. You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes storage costs by automatically moving your data between four access tiers when your access patterns change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data. For more information, see Understanding and managing Amazon S3 storage classes. Storage management Amazon S3 has storage management features that you can use to manage costs, meet regulatory requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements. • S3 Lifecycle – Configure a lifecycle configuration to manage your objects and store them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. 
You can use"} +{"global_id": 786, "doc_id": "s3", "chunk_id": "1", "question_id": 3, "question": "What feature does S3 Intelligent-Tiering provide?", "answer_span": "You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes storage costs by automatically moving your data between four access tiers when your access patterns change.", "chunk": "Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs 50 percent lower than S3 Standard. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Additionally, to further increase access speed and support hundreds of thousands of requests per second, data is stored in a new bucket type: an Amazon S3 directory bucket. For more information, see S3 Express One Zone and Working with directory buckets. You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes storage costs by automatically moving your data between four access tiers when your access patterns change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data. For more information, see Understanding and managing Amazon S3 storage classes. Storage management Amazon S3 has storage management features that you can use to manage costs, meet regulatory requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements. • S3 Lifecycle – Configure a lifecycle configuration to manage your objects and store them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use"} +{"global_id": 787, "doc_id": "s3", "chunk_id": "1", "question_id": 4, "question": "What is the purpose of S3 Lifecycle?", "answer_span": "S3 Lifecycle – Configure a lifecycle configuration to manage your objects and store them cost effectively throughout their lifecycle.", "chunk": "Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latencysensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and with request costs 50 percent lower than S3 Standard. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Additionally, to further increase access speed and support hundreds of thousands of requests per second, data is stored in a new bucket type: an Amazon S3 directory bucket. For more information, see S3 Express One Zone and Working with directory buckets. 
You can store data with changing or unknown access patterns in S3 Intelligent-Tiering, which optimizes storage costs by automatically moving your data between four access tiers when your access patterns change. These four access tiers include two low-latency access tiers optimized for frequent and infrequent access, and two opt-in archive access tiers designed for asynchronous access for rarely accessed data. For more information, see Understanding and managing Amazon S3 storage classes. Storage management Amazon S3 has storage management features that you can use to manage costs, meet regulatory requirements, reduce latency, and save multiple distinct copies of your data for compliance requirements. • S3 Lifecycle – Configure a lifecycle configuration to manage your objects and store them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use"} +{"global_id": 788, "doc_id": "s3", "chunk_id": "2", "question_id": 1, "question": "What can you do with S3 Object Lock?", "answer_span": "You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to simply add another layer of protection against object changes and deletions.", "chunk": "them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to simply add another layer of protection against object changes and deletions. • S3 Replication – Replicate objects and their respective metadata and object tags to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, and other use cases. • S3 Batch Operations – Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3 console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function, and Restore on millions or billions of objects. Storage management API Version 2006-03-01 2 Amazon Simple Storage Service User Guide Access management and security Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3 buckets and the objects in them are private. You have access only to the S3 resources that you create. To grant granular resource permissions that support your specific use case or to audit the permissions of your Amazon S3 resources, you can use the following features. • S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. 
For more information, see Configuring"} +{"global_id": 789, "doc_id": "s3", "chunk_id": "2", "question_id": 2, "question": "What is the purpose of S3 Replication?", "answer_span": "Replicate objects and their respective metadata and object tags to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, and other use cases.", "chunk": "them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to simply add another layer of protection against object changes and deletions. • S3 Replication – Replicate objects and their respective metadata and object tags to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, and other use cases. • S3 Batch Operations – Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3 console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function, and Restore on millions or billions of objects. Storage management API Version 2006-03-01 2 Amazon Simple Storage Service User Guide Access management and security Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3 buckets and the objects in them are private. You have access only to the S3 resources that you create. To grant granular resource permissions that support your specific use case or to audit the permissions of your Amazon S3 resources, you can use the following features. • S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. For more information, see Configuring"} +{"global_id": 790, "doc_id": "s3", "chunk_id": "2", "question_id": 3, "question": "What is the default access level for S3 buckets?", "answer_span": "By default, S3 buckets and the objects in them are private.", "chunk": "them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to simply add another layer of protection against object changes and deletions. • S3 Replication – Replicate objects and their respective metadata and object tags to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, and other use cases. • S3 Batch Operations – Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3 console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function, and Restore on millions or billions of objects. 
Storage management API Version 2006-03-01 2 Amazon Simple Storage Service User Guide Access management and security Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3 buckets and the objects in them are private. You have access only to the S3 resources that you create. To grant granular resource permissions that support your specific use case or to audit the permissions of your Amazon S3 resources, you can use the following features. • S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. For more information, see Configuring"} +{"global_id": 791, "doc_id": "s3", "chunk_id": "2", "question_id": 4, "question": "What does S3 Block Public Access do?", "answer_span": "Block public access to S3 buckets and objects.", "chunk": "them cost effectively throughout their lifecycle. You can transition objects to other S3 storage classes or expire objects that reach the end of their lifetimes. • S3 Object Lock – Prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. You can use Object Lock to help meet regulatory requirements that require write-once-read-many (WORM) storage or to simply add another layer of protection against object changes and deletions. • S3 Replication – Replicate objects and their respective metadata and object tags to one or more destination buckets in the same or different AWS Regions for reduced latency, compliance, security, and other use cases. • S3 Batch Operations – Manage billions of objects at scale with a single S3 API request or a few clicks in the Amazon S3 console. You can use Batch Operations to perform operations such as Copy, Invoke AWS Lambda function, and Restore on millions or billions of objects. Storage management API Version 2006-03-01 2 Amazon Simple Storage Service User Guide Access management and security Amazon S3 provides features for auditing and managing access to your buckets and objects. By default, S3 buckets and the objects in them are private. You have access only to the S3 resources that you create. To grant granular resource permissions that support your specific use case or to audit the permissions of your Amazon S3 resources, you can use the following features. • S3 Block Public Access – Block public access to S3 buckets and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. For more information, see Configuring"} +{"global_id": 792, "doc_id": "s3", "chunk_id": "3", "question_id": 1, "question": "What are Block Public Access settings turned on at by default?", "answer_span": "By default, Block Public Access settings are turned on at the bucket level.", "chunk": "and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. For more information, see Configuring block public access settings for your S3 buckets. 
• AWS Identity and Access Management (IAM) – IAM is a web service that helps you securely control access to AWS resources, including your Amazon S3 resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. • Bucket policies – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them. • Amazon S3 access points – Configure named network endpoints with dedicated access policies to manage data access at scale for shared datasets in Amazon S3. • Access control lists (ACLs) – Grant read and write permissions for individual buckets and objects to authorized users. As a general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM user policies for access control instead of ACLs. Policies are a simplified and more flexible access control option. With bucket policies and access point policies, you can define rules that apply broadly across all requests to your Amazon S3 resources. For more information about the specific cases when you'd use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use"} +{"global_id": 793, "doc_id": "s3", "chunk_id": "3", "question_id": 2, "question": "What is AWS Identity and Access Management (IAM)?", "answer_span": "IAM is a web service that helps you securely control access to AWS resources, including your Amazon S3 resources.", "chunk": "and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. For more information, see Configuring block public access settings for your S3 buckets. • AWS Identity and Access Management (IAM) – IAM is a web service that helps you securely control access to AWS resources, including your Amazon S3 resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. • Bucket policies – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them. • Amazon S3 access points – Configure named network endpoints with dedicated access policies to manage data access at scale for shared datasets in Amazon S3. • Access control lists (ACLs) – Grant read and write permissions for individual buckets and objects to authorized users. As a general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM user policies for access control instead of ACLs. Policies are a simplified and more flexible access control option. With bucket policies and access point policies, you can define rules that apply broadly across all requests to your Amazon S3 resources. For more information about the specific cases when you'd use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. 
S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use"} +{"global_id": 794, "doc_id": "s3", "chunk_id": "3", "question_id": 3, "question": "What do bucket policies use to configure permissions?", "answer_span": "Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them.", "chunk": "and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. For more information, see Configuring block public access settings for your S3 buckets. • AWS Identity and Access Management (IAM) – IAM is a web service that helps you securely control access to AWS resources, including your Amazon S3 resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. • Bucket policies – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them. • Amazon S3 access points – Configure named network endpoints with dedicated access policies to manage data access at scale for shared datasets in Amazon S3. • Access control lists (ACLs) – Grant read and write permissions for individual buckets and objects to authorized users. As a general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM user policies for access control instead of ACLs. Policies are a simplified and more flexible access control option. With bucket policies and access point policies, you can define rules that apply broadly across all requests to your Amazon S3 resources. For more information about the specific cases when you'd use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use"} +{"global_id": 795, "doc_id": "s3", "chunk_id": "3", "question_id": 4, "question": "What is S3 Object Ownership?", "answer_span": "S3 Object Ownership is an Amazon S3 bucket-level setting that you can use.", "chunk": "and objects. By default, Block Public Access settings are turned on at the bucket level. We recommend that you keep all Block Public Access settings enabled unless you know that you need to turn off one or more of them for your specific use case. For more information, see Configuring block public access settings for your S3 buckets. • AWS Identity and Access Management (IAM) – IAM is a web service that helps you securely control access to AWS resources, including your Amazon S3 resources. With IAM, you can centrally manage permissions that control which AWS resources users can access. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. • Bucket policies – Use IAM-based policy language to configure resource-based permissions for your S3 buckets and the objects in them. • Amazon S3 access points – Configure named network endpoints with dedicated access policies to manage data access at scale for shared datasets in Amazon S3. • Access control lists (ACLs) – Grant read and write permissions for individual buckets and objects to authorized users. 
As a general rule, we recommend using S3 resource-based policies (bucket policies and access point policies) or IAM user policies for access control instead of ACLs. Policies are a simplified and more flexible access control option. With bucket policies and access point policies, you can define rules that apply broadly across all requests to your Amazon S3 resources. For more information about the specific cases when you'd use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use"} +{"global_id": 796, "doc_id": "s3", "chunk_id": "4", "question_id": 1, "question": "What is the purpose of S3 Object Ownership?", "answer_span": "S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3.", "chunk": "use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use to disable or enable ACLs. By default, ACLs are disabled. With ACLs disabled, the bucket owner owns all the objects in the bucket and manages access to data exclusively by using access-management policies. • IAM Access Analyzer for S3 – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources. Access management and security API Version 2006-03-01 3 Amazon Simple Storage Service User Guide Data processing To transform data and trigger workflows to automate a variety of other processing activities at scale, you can use the following features. • S3 Object Lambda – Add your own code to S3 GET, HEAD, and LIST requests to modify and process data as it is returned to an application. Filter rows, dynamically resize images, redact confidential data, and much more. • Event notifications – Trigger workflows that use Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda when a change is made to your S3 resources. Storage logging and monitoring Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your Amazon S3 resources are being used. For more information, see Monitoring tools. Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. • AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in"} +{"global_id": 797, "doc_id": "s3", "chunk_id": "4", "question_id": 2, "question": "What is the default setting for ACLs in S3?", "answer_span": "By default, ACLs are disabled.", "chunk": "use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use to disable or enable ACLs. By default, ACLs are disabled. With ACLs disabled, the bucket owner owns all the objects in the bucket and manages access to data exclusively by using access-management policies. 
• IAM Access Analyzer for S3 – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources. Access management and security API Version 2006-03-01 3 Amazon Simple Storage Service User Guide Data processing To transform data and trigger workflows to automate a variety of other processing activities at scale, you can use the following features. • S3 Object Lambda – Add your own code to S3 GET, HEAD, and LIST requests to modify and process data as it is returned to an application. Filter rows, dynamically resize images, redact confidential data, and much more. • Event notifications – Trigger workflows that use Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda when a change is made to your S3 resources. Storage logging and monitoring Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your Amazon S3 resources are being used. For more information, see Monitoring tools. Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. • AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in"} +{"global_id": 798, "doc_id": "s3", "chunk_id": "4", "question_id": 3, "question": "What does IAM Access Analyzer for S3 do?", "answer_span": "Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources.", "chunk": "use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use to disable or enable ACLs. By default, ACLs are disabled. With ACLs disabled, the bucket owner owns all the objects in the bucket and manages access to data exclusively by using access-management policies. • IAM Access Analyzer for S3 – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources. Access management and security API Version 2006-03-01 3 Amazon Simple Storage Service User Guide Data processing To transform data and trigger workflows to automate a variety of other processing activities at scale, you can use the following features. • S3 Object Lambda – Add your own code to S3 GET, HEAD, and LIST requests to modify and process data as it is returned to an application. Filter rows, dynamically resize images, redact confidential data, and much more. • Event notifications – Trigger workflows that use Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda when a change is made to your S3 resources. Storage logging and monitoring Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your Amazon S3 resources are being used. For more information, see Monitoring tools. Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. 
• AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in"} +{"global_id": 799, "doc_id": "s3", "chunk_id": "4", "question_id": 4, "question": "What can you use to trigger workflows when a change is made to your S3 resources?", "answer_span": "Trigger workflows that use Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda when a change is made to your S3 resources.", "chunk": "use ACLs instead of resource-based policies or IAM user policies, see Managing access with ACLs. • S3 Object Ownership – Take ownership of every object in your bucket, simplifying access management for data stored in Amazon S3. S3 Object Ownership is an Amazon S3 bucketlevel setting that you can use to disable or enable ACLs. By default, ACLs are disabled. With ACLs disabled, the bucket owner owns all the objects in the bucket and manages access to data exclusively by using access-management policies. • IAM Access Analyzer for S3 – Evaluate and monitor your S3 bucket access policies, ensuring that the policies provide only the intended access to your S3 resources. Access management and security API Version 2006-03-01 3 Amazon Simple Storage Service User Guide Data processing To transform data and trigger workflows to automate a variety of other processing activities at scale, you can use the following features. • S3 Object Lambda – Add your own code to S3 GET, HEAD, and LIST requests to modify and process data as it is returned to an application. Filter rows, dynamically resize images, redact confidential data, and much more. • Event notifications – Trigger workflows that use Amazon Simple Notification Service (Amazon SNS), Amazon Simple Queue Service (Amazon SQS), and AWS Lambda when a change is made to your S3 resources. Storage logging and monitoring Amazon S3 provides logging and monitoring tools that you can use to monitor and control how your Amazon S3 resources are being used. For more information, see Monitoring tools. Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. • AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in"} +{"global_id": 800, "doc_id": "s3", "chunk_id": "5", "question_id": 1, "question": "What does Amazon CloudWatch metrics for Amazon S3 track?", "answer_span": "Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold.", "chunk": "see Monitoring tools. Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. • AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in Amazon S3. CloudTrail logs provide you with detailed API tracking for S3 bucket-level and object-level operations. Manual monitoring tools • Server access logging – Get detailed records for the requests that are made to a bucket. You can use server access logs for many use cases, such as conducting security and access audits, learning about your customer base, and understanding your Amazon S3 bill. • AWS Trusted Advisor – Evaluate your account by using AWS best practice checks to identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas. 
You can then follow the recommendations to optimize your services and resources. Data processing API Version 2006-03-01 4 Amazon Simple Storage Service User Guide Analytics and insights Amazon S3 offers features to help you gain visibility into your storage usage, which empowers you to better understand, analyze, and optimize your storage at scale. • Amazon S3 Storage Lens – Understand, analyze, and optimize your storage. S3 Storage Lens provides 60+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, AWS Regions, buckets, or prefixes. • Storage Class Analysis – Analyze storage access patterns to decide when it's time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of"} +{"global_id": 801, "doc_id": "s3", "chunk_id": "5", "question_id": 2, "question": "What does AWS CloudTrail do?", "answer_span": "Record actions taken by a user, a role, or an AWS service in Amazon S3.", "chunk": "see Monitoring tools. Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. • AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in Amazon S3. CloudTrail logs provide you with detailed API tracking for S3 bucket-level and object-level operations. Manual monitoring tools • Server access logging – Get detailed records for the requests that are made to a bucket. You can use server access logs for many use cases, such as conducting security and access audits, learning about your customer base, and understanding your Amazon S3 bill. • AWS Trusted Advisor – Evaluate your account by using AWS best practice checks to identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas. You can then follow the recommendations to optimize your services and resources. Data processing API Version 2006-03-01 4 Amazon Simple Storage Service User Guide Analytics and insights Amazon S3 offers features to help you gain visibility into your storage usage, which empowers you to better understand, analyze, and optimize your storage at scale. • Amazon S3 Storage Lens – Understand, analyze, and optimize your storage. S3 Storage Lens provides 60+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, AWS Regions, buckets, or prefixes. • Storage Class Analysis – Analyze storage access patterns to decide when it's time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of"} +{"global_id": 802, "doc_id": "s3", "chunk_id": "5", "question_id": 3, "question": "What is the purpose of server access logging?", "answer_span": "Get detailed records for the requests that are made to a bucket.", "chunk": "see Monitoring tools. 
Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. • AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in Amazon S3. CloudTrail logs provide you with detailed API tracking for S3 bucket-level and object-level operations. Manual monitoring tools • Server access logging – Get detailed records for the requests that are made to a bucket. You can use server access logs for many use cases, such as conducting security and access audits, learning about your customer base, and understanding your Amazon S3 bill. • AWS Trusted Advisor – Evaluate your account by using AWS best practice checks to identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas. You can then follow the recommendations to optimize your services and resources. Data processing API Version 2006-03-01 4 Amazon Simple Storage Service User Guide Analytics and insights Amazon S3 offers features to help you gain visibility into your storage usage, which empowers you to better understand, analyze, and optimize your storage at scale. • Amazon S3 Storage Lens – Understand, analyze, and optimize your storage. S3 Storage Lens provides 60+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, AWS Regions, buckets, or prefixes. • Storage Class Analysis – Analyze storage access patterns to decide when it's time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of"} +{"global_id": 803, "doc_id": "s3", "chunk_id": "5", "question_id": 4, "question": "What does Amazon S3 Storage Lens provide?", "answer_span": "S3 Storage Lens provides 60+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, AWS Regions, buckets, or prefixes.", "chunk": "see Monitoring tools. Automated monitoring tools • Amazon CloudWatch metrics for Amazon S3 – Track the operational health of your S3 resources and configure billing alerts when estimated charges reach a user-defined threshold. • AWS CloudTrail – Record actions taken by a user, a role, or an AWS service in Amazon S3. CloudTrail logs provide you with detailed API tracking for S3 bucket-level and object-level operations. Manual monitoring tools • Server access logging – Get detailed records for the requests that are made to a bucket. You can use server access logs for many use cases, such as conducting security and access audits, learning about your customer base, and understanding your Amazon S3 bill. • AWS Trusted Advisor – Evaluate your account by using AWS best practice checks to identify ways to optimize your AWS infrastructure, improve security and performance, reduce costs, and monitor service quotas. You can then follow the recommendations to optimize your services and resources. Data processing API Version 2006-03-01 4 Amazon Simple Storage Service User Guide Analytics and insights Amazon S3 offers features to help you gain visibility into your storage usage, which empowers you to better understand, analyze, and optimize your storage at scale. 
• Amazon S3 Storage Lens – Understand, analyze, and optimize your storage. S3 Storage Lens provides 60+ usage and activity metrics and interactive dashboards to aggregate data for your entire organization, specific accounts, AWS Regions, buckets, or prefixes. • Storage Class Analysis – Analyze storage access patterns to decide when it's time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of"} +{"global_id": 804, "doc_id": "s3", "chunk_id": "6", "question_id": 1, "question": "What does S3 Inventory provide?", "answer_span": "S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports.", "chunk": "time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of your objects. For a list of all the metadata available for each object in Inventory reports, see Amazon S3 Inventory list. Strong consistency Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes of new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. For more information, see Amazon S3 data consistency model. How Amazon S3 works Amazon S3 is an object storage service that stores data as objects, hierarchical data, or tabular data within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects. To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region. Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the object within the bucket. S3 provides features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Analytics and insights API Version 2006-03-01 5 Amazon Simple"} +{"global_id": 805, "doc_id": "s3", "chunk_id": "6", "question_id": 2, "question": "What type of consistency does Amazon S3 provide?", "answer_span": "Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions.", "chunk": "time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of your objects. For a list of all the metadata available for each object in Inventory reports, see Amazon S3 Inventory list. 
Strong consistency Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes of new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. For more information, see Amazon S3 data consistency model. How Amazon S3 works Amazon S3 is an object storage service that stores data as objects, hierarchical data, or tabular data within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects. To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region. Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the object within the bucket. S3 provides features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Analytics and insights API Version 2006-03-01 5 Amazon Simple"} +{"global_id": 806, "doc_id": "s3", "chunk_id": "6", "question_id": 3, "question": "What is an object in Amazon S3?", "answer_span": "An object is a file and any metadata that describes the file.", "chunk": "time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of your objects. For a list of all the metadata available for each object in Inventory reports, see Amazon S3 Inventory list. Strong consistency Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes of new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. For more information, see Amazon S3 data consistency model. How Amazon S3 works Amazon S3 is an object storage service that stores data as objects, hierarchical data, or tabular data within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects. To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region. Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the object within the bucket. S3 provides features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. 
Analytics and insights API Version 2006-03-01 5 Amazon Simple"} +{"global_id": 807, "doc_id": "s3", "chunk_id": "6", "question_id": 4, "question": "What feature allows you to keep multiple versions of an object in the same bucket?", "answer_span": "you can use S3 Versioning to keep multiple versions of an object in the same bucket.", "chunk": "time to move data to a more cost-effective storage class. • S3 Inventory with Inventory reports – Audit and report on objects and their corresponding metadata and configure other Amazon S3 features to take action in Inventory reports. For example, you can report on the replication and encryption status of your objects. For a list of all the metadata available for each object in Inventory reports, see Amazon S3 Inventory list. Strong consistency Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes of new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access control lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. For more information, see Amazon S3 data consistency model. How Amazon S3 works Amazon S3 is an object storage service that stores data as objects, hierarchical data, or tabular data within buckets. An object is a file and any metadata that describes the file. A bucket is a container for objects. To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS Region. Then, you upload your data to that bucket as objects in Amazon S3. Each object has a key (or key name), which is the unique identifier for the object within the bucket. S3 provides features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Analytics and insights API Version 2006-03-01 5 Amazon Simple"} +{"global_id": 808, "doc_id": "s3", "chunk_id": "7", "question_id": 1, "question": "What can you use S3 Versioning for?", "answer_span": "you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten.", "chunk": "features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Analytics and insights API Version 2006-03-01 5 Amazon Simple Storage Service User Guide Buckets and the objects in them are private and can be accessed only if you explicitly grant access permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points to manage access. Topics • Buckets • Objects • Keys • S3 Versioning • Version ID • Bucket policy • S3 access points • Access control lists (ACLs) • Regions Buckets Amazon S3 supports four types of buckets—general purpose buckets, directory buckets, table buckets, and vector buckets. Each type of bucket provides a unique set of features for different use cases. General purpose buckets – General purpose buckets are recommended for most use cases and access patterns and are the original S3 bucket type. 
A general purpose bucket is a container for objects stored in Amazon S3, and you can store any number of objects in a bucket and across all storage classes (except for S3 Express One Zone), so you can redundantly store objects across multiple Availability Zones. For more information, see Creating, configuring, and working with Amazon S3 general purpose buckets. Note By default, all general purpose buckets are private. However, you can grant public access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases"} +{"global_id": 809, "doc_id": "s3", "chunk_id": "7", "question_id": 2, "question": "What are the four types of buckets supported by Amazon S3?", "answer_span": "Amazon S3 supports four types of buckets—general purpose buckets, directory buckets, table buckets, and vector buckets.", "chunk": "features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Analytics and insights API Version 2006-03-01 5 Amazon Simple Storage Service User Guide Buckets and the objects in them are private and can be accessed only if you explicitly grant access permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points to manage access. Topics • Buckets • Objects • Keys • S3 Versioning • Version ID • Bucket policy • S3 access points • Access control lists (ACLs) • Regions Buckets Amazon S3 supports four types of buckets—general purpose buckets, directory buckets, table buckets, and vector buckets. Each type of bucket provides a unique set of features for different use cases. General purpose buckets – General purpose buckets are recommended for most use cases and access patterns and are the original S3 bucket type. A general purpose bucket is a container for objects stored in Amazon S3, and you can store any number of objects in a bucket and across all storage classes (except for S3 Express One Zone), so you can redundantly store objects across multiple Availability Zones. For more information, see Creating, configuring, and working with Amazon S3 general purpose buckets. Note By default, all general purpose buckets are private. However, you can grant public access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases"} +{"global_id": 810, "doc_id": "s3", "chunk_id": "7", "question_id": 3, "question": "What is a general purpose bucket?", "answer_span": "General purpose buckets are recommended for most use cases and access patterns and are the original S3 bucket type.", "chunk": "features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. 
Analytics and insights API Version 2006-03-01 5 Amazon Simple Storage Service User Guide Buckets and the objects in them are private and can be accessed only if you explicitly grant access permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points to manage access. Topics • Buckets • Objects • Keys • S3 Versioning • Version ID • Bucket policy • S3 access points • Access control lists (ACLs) • Regions Buckets Amazon S3 supports four types of buckets—general purpose buckets, directory buckets, table buckets, and vector buckets. Each type of bucket provides a unique set of features for different use cases. General purpose buckets – General purpose buckets are recommended for most use cases and access patterns and are the original S3 bucket type. A general purpose bucket is a container for objects stored in Amazon S3, and you can store any number of objects in a bucket and across all storage classes (except for S3 Express One Zone), so you can redundantly store objects across multiple Availability Zones. For more information, see Creating, configuring, and working with Amazon S3 general purpose buckets. Note By default, all general purpose buckets are private. However, you can grant public access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases"} +{"global_id": 811, "doc_id": "s3", "chunk_id": "7", "question_id": 4, "question": "Are general purpose buckets private by default?", "answer_span": "By default, all general purpose buckets are private.", "chunk": "features that you can configure to support your specific use case. For example, you can use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to restore objects that are accidentally deleted or overwritten. Analytics and insights API Version 2006-03-01 5 Amazon Simple Storage Service User Guide Buckets and the objects in them are private and can be accessed only if you explicitly grant access permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies, access control lists (ACLs), and S3 Access Points to manage access. Topics • Buckets • Objects • Keys • S3 Versioning • Version ID • Bucket policy • S3 access points • Access control lists (ACLs) • Regions Buckets Amazon S3 supports four types of buckets—general purpose buckets, directory buckets, table buckets, and vector buckets. Each type of bucket provides a unique set of features for different use cases. General purpose buckets – General purpose buckets are recommended for most use cases and access patterns and are the original S3 bucket type. A general purpose bucket is a container for objects stored in Amazon S3, and you can store any number of objects in a bucket and across all storage classes (except for S3 Express One Zone), so you can redundantly store objects across multiple Availability Zones. For more information, see Creating, configuring, and working with Amazon S3 general purpose buckets. Note By default, all general purpose buckets are private. However, you can grant public access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. 
For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases"} +{"global_id": 812, "doc_id": "s3", "chunk_id": "8", "question_id": 1, "question": "What can you control access to in general purpose buckets?", "answer_span": "You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level.", "chunk": "access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases and data-residency use cases. By default, you can create up to 100 directory buckets in your AWS account, with no limit on the number of objects that you can store in a directory bucket. Directory buckets organize objects into hierarchical directories (prefixes) instead of the flat storage structure of general purpose buckets. This bucket type has no prefix limits and individual directories can scale horizontally. For more information, see Working with directory buckets. • For low-latency use cases, you can create a directory bucket in a single AWS Availability Zone to store data. Directory buckets in Availability Zones support the S3 Express One Zone storage class. With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. The S3 Express One Zone storage class is recommended if your application is performance sensitive and benefits from single-digit millisecond PUT and GET latencies. To learn more about creating directory buckets in Availability Zones, see High performance workloads. • For data-residency use cases, you can create a directory bucket in a single AWS Dedicated Local Zone (DLZ) to store data. In Dedicated Local Zones, you can create S3 directory buckets to store data in a specific data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see Data residency workloads. Note Directory buckets have all public access"} +{"global_id": 813, "doc_id": "s3", "chunk_id": "8", "question_id": 2, "question": "How many directory buckets can you create in your AWS account by default?", "answer_span": "By default, you can create up to 100 directory buckets in your AWS account.", "chunk": "access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases and data-residency use cases. By default, you can create up to 100 directory buckets in your AWS account, with no limit on the number of objects that you can store in a directory bucket. Directory buckets organize objects into hierarchical directories (prefixes) instead of the flat storage structure of general purpose buckets. This bucket type has no prefix limits and individual directories can scale horizontally. For more information, see Working with directory buckets. 
• For low-latency use cases, you can create a directory bucket in a single AWS Availability Zone to store data. Directory buckets in Availability Zones support the S3 Express One Zone storage class. With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. The S3 Express One Zone storage class is recommended if your application is performance sensitive and benefits from single-digit millisecond PUT and GET latencies. To learn more about creating directory buckets in Availability Zones, see High performance workloads. • For data-residency use cases, you can create a directory bucket in a single AWS Dedicated Local Zone (DLZ) to store data. In Dedicated Local Zones, you can create S3 directory buckets to store data in a specific data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see Data residency workloads. Note Directory buckets have all public access"} +{"global_id": 814, "doc_id": "s3", "chunk_id": "8", "question_id": 3, "question": "What is recommended for low-latency use cases?", "answer_span": "For low-latency use cases, you can create a directory bucket in a single AWS Availability Zone to store data.", "chunk": "access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases and data-residency use cases. By default, you can create up to 100 directory buckets in your AWS account, with no limit on the number of objects that you can store in a directory bucket. Directory buckets organize objects into hierarchical directories (prefixes) instead of the flat storage structure of general purpose buckets. This bucket type has no prefix limits and individual directories can scale horizontally. For more information, see Working with directory buckets. • For low-latency use cases, you can create a directory bucket in a single AWS Availability Zone to store data. Directory buckets in Availability Zones support the S3 Express One Zone storage class. With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. The S3 Express One Zone storage class is recommended if your application is performance sensitive and benefits from single-digit millisecond PUT and GET latencies. To learn more about creating directory buckets in Availability Zones, see High performance workloads. • For data-residency use cases, you can create a directory bucket in a single AWS Dedicated Local Zone (DLZ) to store data. In Dedicated Local Zones, you can create S3 directory buckets to store data in a specific data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see Data residency workloads. 
Note Directory buckets have all public access"} +{"global_id": 815, "doc_id": "s3", "chunk_id": "8", "question_id": 4, "question": "What storage class do directory buckets in Local Zones support?", "answer_span": "Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class.", "chunk": "access to general purpose buckets. You can control access to general purpose buckets at the bucket, prefix (folder), or object tag level. For more information, see Access control in Amazon S3. Buckets API Version 2006-03-01 6 Amazon Simple Storage Service User Guide Directory buckets – Recommended for low-latency use cases and data-residency use cases. By default, you can create up to 100 directory buckets in your AWS account, with no limit on the number of objects that you can store in a directory bucket. Directory buckets organize objects into hierarchical directories (prefixes) instead of the flat storage structure of general purpose buckets. This bucket type has no prefix limits and individual directories can scale horizontally. For more information, see Working with directory buckets. • For low-latency use cases, you can create a directory bucket in a single AWS Availability Zone to store data. Directory buckets in Availability Zones support the S3 Express One Zone storage class. With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. The S3 Express One Zone storage class is recommended if your application is performance sensitive and benefits from single-digit millisecond PUT and GET latencies. To learn more about creating directory buckets in Availability Zones, see High performance workloads. • For data-residency use cases, you can create a directory bucket in a single AWS Dedicated Local Zone (DLZ) to store data. In Dedicated Local Zones, you can create S3 directory buckets to store data in a specific data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see Data residency workloads. Note Directory buckets have all public access"} +{"global_id": 816, "doc_id": "s3", "chunk_id": "9", "question_id": 1, "question": "What storage class do directory buckets in Local Zones support?", "answer_span": "Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class.", "chunk": "data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see Data residency workloads. Note Directory buckets have all public access disabled by default. This behavior can't be changed. You can't grant access to objects stored in directory buckets. You can grant access only to your directory buckets. For more information, see Authenticating and authorizing requests. Table buckets – Recommended for storing tabular data, such as daily purchase transactions, streaming sensor data, or ad impressions. Tabular data represents data in columns and rows, like in a database table. Table buckets provide S3 storage that's optimized for analytics and machine learning workloads, with features designed to continuously improve query performance and reduce storage costs for tables. 
S3 Tables are purpose-built for storing tabular data in the Apache Iceberg format. You can query tabular data in S3 Tables with popular query engines, including Amazon Athena, Amazon Redshift, and Apache Spark. By default, you can create up to 10 table buckets per Buckets API Version 2006-03-01 7 Amazon Simple Storage Service User Guide AWS account per AWS Region and up to 10,000 tables per table bucket. For more information, see Working with S3 Tables and table buckets. Note All table buckets and tables are private and can't be made public. These resources can only be accessed by users who are explicitly granted access. To grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors."} +{"global_id": 817, "doc_id": "s3", "chunk_id": "9", "question_id": 2, "question": "What is the default access level for directory buckets?", "answer_span": "Note Directory buckets have all public access disabled by default.", "chunk": "data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see Data residency workloads. Note Directory buckets have all public access disabled by default. This behavior can't be changed. You can't grant access to objects stored in directory buckets. You can grant access only to your directory buckets. For more information, see Authenticating and authorizing requests. Table buckets – Recommended for storing tabular data, such as daily purchase transactions, streaming sensor data, or ad impressions. Tabular data represents data in columns and rows, like in a database table. Table buckets provide S3 storage that's optimized for analytics and machine learning workloads, with features designed to continuously improve query performance and reduce storage costs for tables. S3 Tables are purpose-built for storing tabular data in the Apache Iceberg format. You can query tabular data in S3 Tables with popular query engines, including Amazon Athena, Amazon Redshift, and Apache Spark. By default, you can create up to 10 table buckets per Buckets API Version 2006-03-01 7 Amazon Simple Storage Service User Guide AWS account per AWS Region and up to 10,000 tables per table bucket. For more information, see Working with S3 Tables and table buckets. Note All table buckets and tables are private and can't be made public. These resources can only be accessed by users who are explicitly granted access. To grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors."} +{"global_id": 818, "doc_id": "s3", "chunk_id": "9", "question_id": 3, "question": "How many table buckets can you create per AWS account per AWS Region?", "answer_span": "By default, you can create up to 10 table buckets per AWS account per AWS Region.", "chunk": "data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. 
To learn more about creating directory buckets in Local Zones, see Data residency workloads. Note Directory buckets have all public access disabled by default. This behavior can't be changed. You can't grant access to objects stored in directory buckets. You can grant access only to your directory buckets. For more information, see Authenticating and authorizing requests. Table buckets – Recommended for storing tabular data, such as daily purchase transactions, streaming sensor data, or ad impressions. Tabular data represents data in columns and rows, like in a database table. Table buckets provide S3 storage that's optimized for analytics and machine learning workloads, with features designed to continuously improve query performance and reduce storage costs for tables. S3 Tables are purpose-built for storing tabular data in the Apache Iceberg format. You can query tabular data in S3 Tables with popular query engines, including Amazon Athena, Amazon Redshift, and Apache Spark. By default, you can create up to 10 table buckets per Buckets API Version 2006-03-01 7 Amazon Simple Storage Service User Guide AWS account per AWS Region and up to 10,000 tables per table bucket. For more information, see Working with S3 Tables and table buckets. Note All table buckets and tables are private and can't be made public. These resources can only be accessed by users who are explicitly granted access. To grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors."} +{"global_id": 819, "doc_id": "s3", "chunk_id": "9", "question_id": 4, "question": "What format are S3 Tables purpose-built for storing tabular data?", "answer_span": "S3 Tables are purpose-built for storing tabular data in the Apache Iceberg format.", "chunk": "data perimeter, which helps support your data residency and isolation use cases. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see Data residency workloads. Note Directory buckets have all public access disabled by default. This behavior can't be changed. You can't grant access to objects stored in directory buckets. You can grant access only to your directory buckets. For more information, see Authenticating and authorizing requests. Table buckets – Recommended for storing tabular data, such as daily purchase transactions, streaming sensor data, or ad impressions. Tabular data represents data in columns and rows, like in a database table. Table buckets provide S3 storage that's optimized for analytics and machine learning workloads, with features designed to continuously improve query performance and reduce storage costs for tables. S3 Tables are purpose-built for storing tabular data in the Apache Iceberg format. You can query tabular data in S3 Tables with popular query engines, including Amazon Athena, Amazon Redshift, and Apache Spark. By default, you can create up to 10 table buckets per Buckets API Version 2006-03-01 7 Amazon Simple Storage Service User Guide AWS account per AWS Region and up to 10,000 tables per table bucket. For more information, see Working with S3 Tables and table buckets. Note All table buckets and tables are private and can't be made public. 
These resources can only be accessed by users who are explicitly granted access. To grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors."} +{"global_id": 820, "doc_id": "s3", "chunk_id": "10", "question_id": 1, "question": "What are S3 vector buckets purpose-built for?", "answer_span": "S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors.", "chunk": "grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors. Vector buckets use dedicated API operations to write and query vector data efficiently. With S3 vector buckets, you can store vector embeddings for machine learning models, perform similarity searches, and integrate with services like Amazon Bedrock and Amazon OpenSearch. S3 vector buckets organize data using vector indexes, which are resources within a bucket that store and organize vector data for efficient similarity search. Each vector index can be configured with specific dimensions, distance metrics (like cosine similarity), and metadata configurations to optimize for your specific use case. For more information, see Working with S3 Vectors and vector buckets. Additional information about all bucket types When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names must follow the following bucket naming rules: • General purpose bucket naming rules • Directory bucket naming rules • Table bucket naming rules Buckets also: • Organize the Amazon S3 namespace at the highest level. For general purpose buckets, this namespace is S3. For directory buckets, this namespace is s3express. For table buckets, this namespace is s3tables. Buckets API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata"} +{"global_id": 821, "doc_id": "s3", "chunk_id": "10", "question_id": 2, "question": "What can you do with S3 vector buckets?", "answer_span": "With S3 vector buckets, you can store vector embeddings for machine learning models, perform similarity searches, and integrate with services like Amazon Bedrock and Amazon OpenSearch.", "chunk": "grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors. Vector buckets use dedicated API operations to write and query vector data efficiently. With S3 vector buckets, you can store vector embeddings for machine learning models, perform similarity searches, and integrate with services like Amazon Bedrock and Amazon OpenSearch. 
S3 vector buckets organize data using vector indexes, which are resources within a bucket that store and organize vector data for efficient similarity search. Each vector index can be configured with specific dimensions, distance metrics (like cosine similarity), and metadata configurations to optimize for your specific use case. For more information, see Working with S3 Vectors and vector buckets. Additional information about all bucket types When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names must follow the following bucket naming rules: • General purpose bucket naming rules • Directory bucket naming rules • Table bucket naming rules Buckets also: • Organize the Amazon S3 namespace at the highest level. For general purpose buckets, this namespace is S3. For directory buckets, this namespace is s3express. For table buckets, this namespace is s3tables. Buckets API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata"} +{"global_id": 822, "doc_id": "s3", "chunk_id": "10", "question_id": 3, "question": "What must you do when you create a bucket?", "answer_span": "When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will reside.", "chunk": "grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors. Vector buckets use dedicated API operations to write and query vector data efficiently. With S3 vector buckets, you can store vector embeddings for machine learning models, perform similarity searches, and integrate with services like Amazon Bedrock and Amazon OpenSearch. S3 vector buckets organize data using vector indexes, which are resources within a bucket that store and organize vector data for efficient similarity search. Each vector index can be configured with specific dimensions, distance metrics (like cosine similarity), and metadata configurations to optimize for your specific use case. For more information, see Working with S3 Vectors and vector buckets. Additional information about all bucket types When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names must follow the following bucket naming rules: • General purpose bucket naming rules • Directory bucket naming rules • Table bucket naming rules Buckets also: • Organize the Amazon S3 namespace at the highest level. For general purpose buckets, this namespace is S3. For directory buckets, this namespace is s3express. For table buckets, this namespace is s3tables. Buckets API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. 
The metadata"} +{"global_id": 823, "doc_id": "s3", "chunk_id": "10", "question_id": 4, "question": "What are objects in Amazon S3?", "answer_span": "Objects are the fundamental entities stored in Amazon S3.", "chunk": "grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles. For more information, see Security for S3 Tables. Vector buckets – S3 vector buckets are a type of Amazon S3 bucket that are purpose-built to store and query vectors. Vector buckets use dedicated API operations to write and query vector data efficiently. With S3 vector buckets, you can store vector embeddings for machine learning models, perform similarity searches, and integrate with services like Amazon Bedrock and Amazon OpenSearch. S3 vector buckets organize data using vector indexes, which are resources within a bucket that store and organize vector data for efficient similarity search. Each vector index can be configured with specific dimensions, distance metrics (like cosine similarity), and metadata configurations to optimize for your specific use case. For more information, see Working with S3 Vectors and vector buckets. Additional information about all bucket types When you create a bucket, you enter a bucket name and choose the AWS Region where the bucket will reside. After you create a bucket, you cannot change the name of the bucket or its Region. Bucket names must follow the following bucket naming rules: • General purpose bucket naming rules • Directory bucket naming rules • Table bucket naming rules Buckets also: • Organize the Amazon S3 namespace at the highest level. For general purpose buckets, this namespace is S3. For directory buckets, this namespace is s3express. For table buckets, this namespace is s3tables. Buckets API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata"} +{"global_id": 824, "doc_id": "s3", "chunk_id": "11", "question_id": 1, "question": "What are the fundamental entities stored in Amazon S3?", "answer_span": "Objects are the fundamental entities stored in Amazon S3.", "chunk": "API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata is a set of name-value pairs that describe the object. These pairs include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time that the object is stored. Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket). For more information about objects, see Amazon S3 objects overview. 
Keys An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is enabled for the bucket) uniquely identify each object. So you can think of Amazon S3 as a basic data map between \"bucket + key + version\" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use"} +{"global_id": 825, "doc_id": "s3", "chunk_id": "11", "question_id": 2, "question": "What does the metadata of an object consist of?", "answer_span": "The metadata is a set of name-value pairs that describe the object.", "chunk": "API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata is a set of name-value pairs that describe the object. These pairs include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time that the object is stored. Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket). For more information about objects, see Amazon S3 objects overview. Keys An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is enabled for the bucket) uniquely identify each object. So you can think of Amazon S3 as a basic data map between \"bucket + key + version\" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use"} +{"global_id": 826, "doc_id": "s3", "chunk_id": "11", "question_id": 3, "question": "How is an object uniquely identified within a bucket?", "answer_span": "An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket).", "chunk": "API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. 
Objects consist of object data and metadata. The metadata is a set of name-value pairs that describe the object. These pairs include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time that the object is stored. Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket). For more information about objects, see Amazon S3 objects overview. Keys An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is enabled for the bucket) uniquely identify each object. So you can think of Amazon S3 as a basic data map between \"bucket + key + version\" and the object itself. Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use"} +{"global_id": 827, "doc_id": "s3", "chunk_id": "11", "question_id": 4, "question": "What is the unique identifier for an object within a bucket called?", "answer_span": "An object key (or key name) is the unique identifier for an object within a bucket.", "chunk": "API Version 2006-03-01 8 Amazon Simple Storage Service User Guide • Identify the account responsible for storage and data transfer charges. • Serve as the unit of aggregation for usage reporting. Objects Objects are the fundamental entities stored in Amazon S3. Objects consist of object data and metadata. The metadata is a set of name-value pairs that describe the object. These pairs include some default metadata, such as the date last modified, and standard HTTP metadata, such as Content-Type. You can also specify custom metadata at the time that the object is stored. Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. An object is uniquely identified within a bucket by a key (name) and a version ID (if S3 Versioning is enabled on the bucket). For more information about objects, see Amazon S3 objects overview. Keys An object key (or key name) is the unique identifier for an object within a bucket. Every object in a bucket has exactly one key. The combination of a bucket, object key, and optionally, version ID (if S3 Versioning is enabled for the bucket) uniquely identify each object. So you can think of Amazon S3 as a basic data map between \"bucket + key + version\" and the object itself. 
Every object in Amazon S3 can be uniquely addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use"} +{"global_id": 828, "doc_id": "s3", "chunk_id": "12", "question_id": 1, "question": "What is the name of the bucket in the example URL?", "answer_span": "amzn-s3-demobucket is the name of the bucket", "chunk": "addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning, you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily recover from both unintended user actions and application failures. For more information, see Retaining multiple versions of objects with S3 Versioning. Objects API Version 2006-03-01 9 Amazon Simple Storage Service User Guide Version ID When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have a version ID of null. If you modify these (or any other) objects with other operations, such as CopyObject and PutObject, the new objects get a unique version ID. For more information, see Retaining multiple versions of objects with S3 Versioning. Bucket policy A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow"} +{"global_id": 829, "doc_id": "s3", "chunk_id": "12", "question_id": 2, "question": "What does S3 Versioning allow you to do?", "answer_span": "With S3 Versioning, you can preserve, retrieve, and restore every version of every object stored in your buckets.", "chunk": "addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning, you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily recover from both unintended user actions and application failures. For more information, see Retaining multiple versions of objects with S3 Versioning. 
Objects API Version 2006-03-01 9 Amazon Simple Storage Service User Guide Version ID When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have a version ID of null. If you modify these (or any other) objects with other operations, such as CopyObject and PutObject, the new objects get a unique version ID. For more information, see Retaining multiple versions of objects with S3 Versioning. Bucket policy A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow"} +{"global_id": 830, "doc_id": "s3", "chunk_id": "12", "question_id": 3, "question": "What is generated when you enable S3 Versioning in a bucket?", "answer_span": "Amazon S3 generates a unique version ID for each object added to the bucket.", "chunk": "addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning, you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily recover from both unintended user actions and application failures. For more information, see Retaining multiple versions of objects with S3 Versioning. Objects API Version 2006-03-01 9 Amazon Simple Storage Service User Guide Version ID When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have a version ID of null. If you modify these (or any other) objects with other operations, such as CopyObject and PutObject, the new objects get a unique version ID. For more information, see Retaining multiple versions of objects with S3 Versioning. Bucket policy A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. 
Bucket policies allow"} +{"global_id": 831, "doc_id": "s3", "chunk_id": "12", "question_id": 4, "question": "Who can associate a bucket policy with a bucket?", "answer_span": "Only the bucket owner can associate a policy with a bucket.", "chunk": "addressed through the combination of the web service endpoint, bucket name, key, and optionally, a version. For example, in the URL https://amzns3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg, amzn-s3-demobucket is the name of the bucket and photos/puppy.jpg is the key. For more information about object keys, see Naming Amazon S3 objects. S3 Versioning You can use S3 Versioning to keep multiple variants of an object in the same bucket. With S3 Versioning, you can preserve, retrieve, and restore every version of every object stored in your buckets. You can easily recover from both unintended user actions and application failures. For more information, see Retaining multiple versions of objects with S3 Versioning. Objects API Version 2006-03-01 9 Amazon Simple Storage Service User Guide Version ID When you enable S3 Versioning in a bucket, Amazon S3 generates a unique version ID for each object added to the bucket. Objects that already existed in the bucket at the time that you enable versioning have a version ID of null. If you modify these (or any other) objects with other operations, such as CopyObject and PutObject, the new objects get a unique version ID. For more information, see Retaining multiple versions of objects with S3 Versioning. Bucket policy A bucket policy is a resource-based AWS Identity and Access Management (IAM) policy that you can use to grant access permissions to your bucket and the objects in it. Only the bucket owner can associate a policy with a bucket. The permissions attached to the bucket apply to all of the objects in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow"} +{"global_id": 832, "doc_id": "s3", "chunk_id": "13", "question_id": 1, "question": "What are bucket policies limited to in size?", "answer_span": "Bucket policies are limited to 20 KB in size.", "chunk": "in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests based on the elements in the policy, including the requester, S3 actions, resources, and aspects or conditions of the request (for example, the IP address used to make the request). For example, you can create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while ensuring that the bucket owner has full control of the uploaded objects. For more information, see Examples of Amazon S3 bucket policies. In your bucket policy, you can use wildcard characters on Amazon Resource Names (ARNs) and other values to grant permissions to a subset of objects. For example, you can control access to groups of objects that begin with a common prefix or end with a given extension, such as .html. S3 access points Amazon S3 access points are named network endpoints with dedicated access policies that describe how data can be accessed using that endpoint. 
Access points are attached to an underlying data source, such as a general purpose bucket, directory bucket, or a FSx for OpenZFS volume, that you can use to perform S3 object operations, such as GetObject and PutObject. Access points simplify managing data access at scale for shared datasets in Amazon S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private"} +{"global_id": 833, "doc_id": "s3", "chunk_id": "13", "question_id": 2, "question": "What language do bucket policies use?", "answer_span": "Bucket policies use JSON-based access policy language that is standard across AWS.", "chunk": "in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests based on the elements in the policy, including the requester, S3 actions, resources, and aspects or conditions of the request (for example, the IP address used to make the request). For example, you can create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while ensuring that the bucket owner has full control of the uploaded objects. For more information, see Examples of Amazon S3 bucket policies. In your bucket policy, you can use wildcard characters on Amazon Resource Names (ARNs) and other values to grant permissions to a subset of objects. For example, you can control access to groups of objects that begin with a common prefix or end with a given extension, such as .html. S3 access points Amazon S3 access points are named network endpoints with dedicated access policies that describe how data can be accessed using that endpoint. Access points are attached to an underlying data source, such as a general purpose bucket, directory bucket, or a FSx for OpenZFS volume, that you can use to perform S3 object operations, such as GetObject and PutObject. Access points simplify managing data access at scale for shared datasets in Amazon S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private"} +{"global_id": 834, "doc_id": "s3", "chunk_id": "13", "question_id": 3, "question": "What can you use bucket policies to do?", "answer_span": "You can use bucket policies to add or deny permissions for the objects in a bucket.", "chunk": "in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests based on the elements in the policy, including the requester, S3 actions, resources, and aspects or conditions of the request (for example, the IP address used to make the request). For example, you can create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while ensuring that the bucket owner has full control of the uploaded objects. 
For more information, see Examples of Amazon S3 bucket policies. In your bucket policy, you can use wildcard characters on Amazon Resource Names (ARNs) and other values to grant permissions to a subset of objects. For example, you can control access to groups of objects that begin with a common prefix or end with a given extension, such as .html. S3 access points Amazon S3 access points are named network endpoints with dedicated access policies that describe how data can be accessed using that endpoint. Access points are attached to an underlying data source, such as a general purpose bucket, directory bucket, or a FSx for OpenZFS volume, that you can use to perform S3 object operations, such as GetObject and PutObject. Access points simplify managing data access at scale for shared datasets in Amazon S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private"} +{"global_id": 835, "doc_id": "s3", "chunk_id": "13", "question_id": 4, "question": "What do Amazon S3 access points describe?", "answer_span": "Amazon S3 access points are named network endpoints with dedicated access policies that describe how data can be accessed using that endpoint.", "chunk": "in the bucket that are owned by the bucket owner. Bucket policies are limited to 20 KB in size. Bucket policies use JSON-based access policy language that is standard across AWS. You can use bucket policies to add or deny permissions for the objects in a bucket. Bucket policies allow or deny requests based on the elements in the policy, including the requester, S3 actions, resources, and aspects or conditions of the request (for example, the IP address used to make the request). For example, you can create a bucket policy that grants cross-account permissions to upload objects to an S3 bucket while ensuring that the bucket owner has full control of the uploaded objects. For more information, see Examples of Amazon S3 bucket policies. In your bucket policy, you can use wildcard characters on Amazon Resource Names (ARNs) and other values to grant permissions to a subset of objects. For example, you can control access to groups of objects that begin with a common prefix or end with a given extension, such as .html. S3 access points Amazon S3 access points are named network endpoints with dedicated access policies that describe how data can be accessed using that endpoint. Access points are attached to an underlying data source, such as a general purpose bucket, directory bucket, or a FSx for OpenZFS volume, that you can use to perform S3 object operations, such as GetObject and PutObject. Access points simplify managing data access at scale for shared datasets in Amazon S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private"} +{"global_id": 836, "doc_id": "s3", "chunk_id": "14", "question_id": 1, "question": "What can you configure for each access point attached to a bucket?", "answer_span": "You can configure Block Public Access settings for each access point attached to a bucket.", "chunk": "S3. Each access point has its own access point policy. 
You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private cloud (VPC). Version ID API Version 2006-03-01 10 Amazon Simple Storage Service User Guide For more information about access points for general purpose buckets, see Managing access to shared datasets with access points. For more information about access points for directory buckets, see Managing access to shared datasets in directory buckets with access points. Access control lists (ACLs) You can use ACLs to grant read and write permissions to authorized users for individual general purpose buckets and objects. Each general purpose bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS accounts or groups are granted access and the type of access. ACLs are an access control mechanism that predates IAM. For more information about ACLs, see Access control list (ACL) overview. S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to"} +{"global_id": 837, "doc_id": "s3", "chunk_id": "14", "question_id": 2, "question": "What does S3 Object Ownership control?", "answer_span": "S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs.", "chunk": "S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private cloud (VPC). Version ID API Version 2006-03-01 10 Amazon Simple Storage Service User Guide For more information about access points for general purpose buckets, see Managing access to shared datasets with access points. For more information about access points for directory buckets, see Managing access to shared datasets in directory buckets with access points. Access control lists (ACLs) You can use ACLs to grant read and write permissions to authorized users for individual general purpose buckets and objects. Each general purpose bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS accounts or groups are granted access and the type of access. ACLs are an access control mechanism that predates IAM. For more information about ACLs, see Access control list (ACL) overview. S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. 
When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to"} +{"global_id": 838, "doc_id": "s3", "chunk_id": "14", "question_id": 3, "question": "What is the default setting for Object Ownership?", "answer_span": "By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled.", "chunk": "S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private cloud (VPC). Version ID API Version 2006-03-01 10 Amazon Simple Storage Service User Guide For more information about access points for general purpose buckets, see Managing access to shared datasets with access points. For more information about access points for directory buckets, see Managing access to shared datasets in directory buckets with access points. Access control lists (ACLs) You can use ACLs to grant read and write permissions to authorized users for individual general purpose buckets and objects. Each general purpose bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS accounts or groups are granted access and the type of access. ACLs are an access control mechanism that predates IAM. For more information about ACLs, see Access control list (ACL) overview. S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to"} +{"global_id": 839, "doc_id": "s3", "chunk_id": "14", "question_id": 4, "question": "What do ACLs grant to authorized users?", "answer_span": "You can use ACLs to grant read and write permissions to authorized users for individual general purpose buckets and objects.", "chunk": "S3. Each access point has its own access point policy. You can configure Block Public Access settings for each access point attached to a bucket. To restrict Amazon S3 data access to a private network, you can also configure any access point to accept requests only from a virtual private cloud (VPC). Version ID API Version 2006-03-01 10 Amazon Simple Storage Service User Guide For more information about access points for general purpose buckets, see Managing access to shared datasets with access points. For more information about access points for directory buckets, see Managing access to shared datasets in directory buckets with access points. 
Access control lists (ACLs) You can use ACLs to grant read and write permissions to authorized users for individual general purpose buckets and objects. Each general purpose bucket and object has an ACL attached to it as a subresource. The ACL defines which AWS accounts or groups are granted access and the type of access. ACLs are an access control mechanism that predates IAM. For more information about ACLs, see Access control list (ACL) overview. S3 Object Ownership is an Amazon S3 bucket-level setting that you can use to both control ownership of the objects that are uploaded to your bucket and to disable or enable ACLs. By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. When ACLs are disabled, the bucket owner owns all the objects in the bucket and manages access to them exclusively by using access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to"} +{"global_id": 840, "doc_id": "s3", "chunk_id": "15", "question_id": 1, "question": "What do modern use cases in Amazon S3 no longer require?", "answer_span": "A majority of modern use cases in Amazon S3 no longer require the use of ACLs.", "chunk": "access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see Controlling ownership of objects and disabling ACLs for your bucket. Regions You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another Region. For example, objects stored in the Europe (Ireland) Region never leave it. Note You can access Amazon S3 and its features only in the AWS Regions that are enabled for your account. For more information about enabling a Region to create and manage AWS resources, see Managing AWS Regions in the AWS General Reference. Access control lists (ACLs) API Version 2006-03-01 11 Amazon Simple Storage Service User Guide For a list of Amazon S3 Regions and endpoints, see Regions and endpoints in the AWS General Reference. Amazon S3 data consistency model Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes to new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. 
For"} +{"global_id": 841, "doc_id": "s3", "chunk_id": "15", "question_id": 2, "question": "What should you do with ACLs in most circumstances?", "answer_span": "We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually.", "chunk": "access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see Controlling ownership of objects and disabling ACLs for your bucket. Regions You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another Region. For example, objects stored in the Europe (Ireland) Region never leave it. Note You can access Amazon S3 and its features only in the AWS Regions that are enabled for your account. For more information about enabling a Region to create and manage AWS resources, see Managing AWS Regions in the AWS General Reference. Access control lists (ACLs) API Version 2006-03-01 11 Amazon Simple Storage Service User Guide For a list of Amazon S3 Regions and endpoints, see Regions and endpoints in the AWS General Reference. Amazon S3 data consistency model Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes to new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. For"} +{"global_id": 842, "doc_id": "s3", "chunk_id": "15", "question_id": 3, "question": "What does Amazon S3 provide for PUT and DELETE requests?", "answer_span": "Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions.", "chunk": "access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see Controlling ownership of objects and disabling ACLs for your bucket. Regions You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another Region. For example, objects stored in the Europe (Ireland) Region never leave it. Note You can access Amazon S3 and its features only in the AWS Regions that are enabled for your account. 
For more information about enabling a Region to create and manage AWS resources, see Managing AWS Regions in the AWS General Reference. Access control lists (ACLs) API Version 2006-03-01 11 Amazon Simple Storage Service User Guide For a list of Amazon S3 Regions and endpoints, see Regions and endpoints in the AWS General Reference. Amazon S3 data consistency model Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes to new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. For"} +{"global_id": 843, "doc_id": "s3", "chunk_id": "15", "question_id": 4, "question": "What happens to objects stored in an AWS Region?", "answer_span": "Objects stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another Region.", "chunk": "access-management policies. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in unusual circumstances where you need to control access for each object individually. With ACLs disabled, you can use policies to control access to all objects in your bucket, regardless of who uploaded the objects to your bucket. For more information, see Controlling ownership of objects and disabling ACLs for your bucket. Regions You can choose the geographical AWS Region where Amazon S3 stores the buckets that you create. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. Objects stored in an AWS Region never leave the Region unless you explicitly transfer or replicate them to another Region. For example, objects stored in the Europe (Ireland) Region never leave it. Note You can access Amazon S3 and its features only in the AWS Regions that are enabled for your account. For more information about enabling a Region to create and manage AWS resources, see Managing AWS Regions in the AWS General Reference. Access control lists (ACLs) API Version 2006-03-01 11 Amazon Simple Storage Service User Guide For a list of Amazon S3 Regions and endpoints, see Regions and endpoints in the AWS General Reference. Amazon S3 data consistency model Amazon S3 provides strong read-after-write consistency for PUT and DELETE requests of objects in your Amazon S3 bucket in all AWS Regions. This behavior applies to both writes to new objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. For"} +{"global_id": 844, "doc_id": "s3", "chunk_id": "16", "question_id": 1, "question": "What types of requests can overwrite existing objects in Amazon S3?", "answer_span": "PUT requests that overwrite existing objects and DELETE requests.", "chunk": "objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. 
Updates to a single key are atomic. For example, if you make a PUT request to an existing key from one thread and perform a GET request on the same key from a second thread concurrently, you will get either the old data or the new data, but never partial or corrupt data. Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers. If a PUT request is successful, your data is safely stored. Any read (GET or LIST request) that is initiated following the receipt of a successful PUT response will return the data written by the PUT request. Here are examples of this behavior: • A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object appears in the list. • A process replaces an existing object and immediately tries to read it. Amazon S3 returns the new data. • A process deletes an existing object and immediately tries to read it. Amazon S3 does not return any data because the object has been deleted. • A process deletes an existing object and immediately lists keys within its bucket. The object does not appear in the listing. Note • Amazon S3 does not support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage"} +{"global_id": 845, "doc_id": "s3", "chunk_id": "16", "question_id": 2, "question": "What is the consistency model for read operations on Amazon S3?", "answer_span": "read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent.", "chunk": "objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. For example, if you make a PUT request to an existing key from one thread and perform a GET request on the same key from a second thread concurrently, you will get either the old data or the new data, but never partial or corrupt data. Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers. If a PUT request is successful, your data is safely stored. Any read (GET or LIST request) that is initiated following the receipt of a successful PUT response will return the data written by the PUT request. Here are examples of this behavior: • A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object appears in the list. • A process replaces an existing object and immediately tries to read it. Amazon S3 returns the new data. • A process deletes an existing object and immediately tries to read it. Amazon S3 does not return any data because the object has been deleted. • A process deletes an existing object and immediately lists keys within its bucket. The object does not appear in the listing. Note • Amazon S3 does not support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. 
Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage"} +{"global_id": 846, "doc_id": "s3", "chunk_id": "16", "question_id": 3, "question": "What happens if a PUT request is successful?", "answer_span": "If a PUT request is successful, your data is safely stored.", "chunk": "objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. For example, if you make a PUT request to an existing key from one thread and perform a GET request on the same key from a second thread concurrently, you will get either the old data or the new data, but never partial or corrupt data. Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers. If a PUT request is successful, your data is safely stored. Any read (GET or LIST request) that is initiated following the receipt of a successful PUT response will return the data written by the PUT request. Here are examples of this behavior: • A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object appears in the list. • A process replaces an existing object and immediately tries to read it. Amazon S3 returns the new data. • A process deletes an existing object and immediately tries to read it. Amazon S3 does not return any data because the object has been deleted. • A process deletes an existing object and immediately lists keys within its bucket. The object does not appear in the listing. Note • Amazon S3 does not support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage"} +{"global_id": 847, "doc_id": "s3", "chunk_id": "16", "question_id": 4, "question": "What does Amazon S3 return if a process deletes an existing object and immediately tries to read it?", "answer_span": "Amazon S3 does not return any data because the object has been deleted.", "chunk": "objects as well as PUT requests that overwrite existing objects and DELETE requests. In addition, read operations on Amazon S3 Select, Amazon S3 access controls lists (ACLs), Amazon S3 Object Tags, and object metadata (for example, the HEAD object) are strongly consistent. Updates to a single key are atomic. For example, if you make a PUT request to an existing key from one thread and perform a GET request on the same key from a second thread concurrently, you will get either the old data or the new data, but never partial or corrupt data. Amazon S3 achieves high availability by replicating data across multiple servers within AWS data centers. If a PUT request is successful, your data is safely stored. Any read (GET or LIST request) that is initiated following the receipt of a successful PUT response will return the data written by the PUT request. Here are examples of this behavior: • A process writes a new object to Amazon S3 and immediately lists keys within its bucket. The new object appears in the list. • A process replaces an existing object and immediately tries to read it. Amazon S3 returns the new data. • A process deletes an existing object and immediately tries to read it. 
Amazon S3 does not return any data because the object has been deleted. • A process deletes an existing object and immediately lists keys within its bucket. The object does not appear in the listing. Note • Amazon S3 does not support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage"} +{"global_id": 848, "doc_id": "s3", "chunk_id": "17", "question_id": 1, "question": "What happens if two PUT requests are simultaneously made to the same key?", "answer_span": "If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins.", "chunk": "support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage Service User Guide • Updates are key-based. There is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application. Bucket configurations have an eventual consistency model. Specifically, this means that: • If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list. • If you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE requests) on objects in the bucket. Concurrent applications This section provides examples of behavior to be expected from Amazon S3 when multiple clients are writing to the same items. In this example, both W1 (write 1) and W2 (write 2) finish before the start of R1 (read 1) and R2 (read 2). Because S3 is strongly consistent, R1 and R2 both return color = ruby. In the next example, W2 does not finish before the start of R1. Therefore, R1 might return color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. Therefore, these writes"} +{"global_id": 849, "doc_id": "s3", "chunk_id": "17", "question_id": 2, "question": "What is the recommendation after enabling versioning on a bucket?", "answer_span": "We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE requests) on objects in the bucket.", "chunk": "support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage Service User Guide • Updates are key-based. There is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application. 
Bucket configurations have an eventual consistency model. Specifically, this means that: • If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list. • If you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE requests) on objects in the bucket. Concurrent applications This section provides examples of behavior to be expected from Amazon S3 when multiple clients are writing to the same items. In this example, both W1 (write 1) and W2 (write 2) finish before the start of R1 (read 1) and R2 (read 2). Because S3 is strongly consistent, R1 and R2 both return color = ruby. In the next example, W2 does not finish before the start of R1. Therefore, R1 might return color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. Therefore, these writes"} +{"global_id": 850, "doc_id": "s3", "chunk_id": "17", "question_id": 3, "question": "What does the Amazon S3 data consistency model state about updates?", "answer_span": "Updates are key-based. There is no way to make atomic updates across keys.", "chunk": "support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage Service User Guide • Updates are key-based. There is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application. Bucket configurations have an eventual consistency model. Specifically, this means that: • If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list. • If you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE requests) on objects in the bucket. Concurrent applications This section provides examples of behavior to be expected from Amazon S3 when multiple clients are writing to the same items. In this example, both W1 (write 1) and W2 (write 2) finish before the start of R1 (read 1) and R2 (read 2). Because S3 is strongly consistent, R1 and R2 both return color = ruby. In the next example, W2 does not finish before the start of R1. Therefore, R1 might return color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. 
Therefore, these writes"} +{"global_id": 851, "doc_id": "s3", "chunk_id": "17", "question_id": 4, "question": "What might happen if you delete a bucket and immediately list all buckets?", "answer_span": "If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list.", "chunk": "support object locking for concurrent writers. If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins. If this is an issue, you must build an object-locking mechanism into your application. Amazon S3 data consistency model API Version 2006-03-01 12 Amazon Simple Storage Service User Guide • Updates are key-based. There is no way to make atomic updates across keys. For example, you cannot make the update of one key dependent on the update of another key unless you design this functionality into your application. Bucket configurations have an eventual consistency model. Specifically, this means that: • If you delete a bucket and immediately list all buckets, the deleted bucket might still appear in the list. • If you enable versioning on a bucket for the first time, it might take a short amount of time for the change to be fully propagated. We recommend that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE requests) on objects in the bucket. Concurrent applications This section provides examples of behavior to be expected from Amazon S3 when multiple clients are writing to the same items. In this example, both W1 (write 1) and W2 (write 2) finish before the start of R1 (read 1) and R2 (read 2). Because S3 is strongly consistent, R1 and R2 both return color = ruby. In the next example, W2 does not finish before the start of R1. Therefore, R1 might return color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. Therefore, these writes"} +{"global_id": 852, "doc_id": "s3", "chunk_id": "18", "question_id": 1, "question": "What color does R2 return?", "answer_span": "R2 returns color = garnet.", "chunk": "color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. Therefore, these writes are considered concurrent. Amazon S3 internally uses last-writer-wins semantics to determine which write takes precedence. However, the order in which Amazon S3 receives the requests and the order in which applications receive acknowledgments cannot be predicted because of various factors, such as network latency. For example, W2 might be initiated by an Amazon EC2 instance in the same Region, while W1 might be initiated by a host that is farther away. The best way to determine the final value is to perform a read after both writes have been acknowledged. Related services After you load your data into Amazon S3, you can use it with other AWS services. The following are the services that you might use most frequently: • Amazon Elastic Compute Cloud (Amazon EC2) – Provides secure and scalable computing capacity in the AWS Cloud. 
Using Amazon EC2 eliminates your need to invest in hardware Related services API Version 2006-03-01 14 Amazon Simple Storage Service User Guide upfront, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. • Amazon EMR – Helps businesses, researchers, data analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. • AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where"} +{"global_id": 853, "doc_id": "s3", "chunk_id": "18", "question_id": 2, "question": "What semantics does Amazon S3 use to determine which write takes precedence?", "answer_span": "Amazon S3 internally uses last-writer-wins semantics to determine which write takes precedence.", "chunk": "color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. Therefore, these writes are considered concurrent. Amazon S3 internally uses last-writer-wins semantics to determine which write takes precedence. However, the order in which Amazon S3 receives the requests and the order in which applications receive acknowledgments cannot be predicted because of various factors, such as network latency. For example, W2 might be initiated by an Amazon EC2 instance in the same Region, while W1 might be initiated by a host that is farther away. The best way to determine the final value is to perform a read after both writes have been acknowledged. Related services After you load your data into Amazon S3, you can use it with other AWS services. The following are the services that you might use most frequently: • Amazon Elastic Compute Cloud (Amazon EC2) – Provides secure and scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware Related services API Version 2006-03-01 14 Amazon Simple Storage Service User Guide upfront, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. • Amazon EMR – Helps businesses, researchers, data analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. • AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where"} +{"global_id": 854, "doc_id": "s3", "chunk_id": "18", "question_id": 3, "question": "What service provides secure and scalable computing capacity in the AWS Cloud?", "answer_span": "Amazon Elastic Compute Cloud (Amazon EC2) – Provides secure and scalable computing capacity in the AWS Cloud.", "chunk": "color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. Therefore, these writes are considered concurrent. 
Amazon S3 internally uses last-writer-wins semantics to determine which write takes precedence. However, the order in which Amazon S3 receives the requests and the order in which applications receive acknowledgments cannot be predicted because of various factors, such as network latency. For example, W2 might be initiated by an Amazon EC2 instance in the same Region, while W1 might be initiated by a host that is farther away. The best way to determine the final value is to perform a read after both writes have been acknowledged. Related services After you load your data into Amazon S3, you can use it with other AWS services. The following are the services that you might use most frequently: • Amazon Elastic Compute Cloud (Amazon EC2) – Provides secure and scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware Related services API Version 2006-03-01 14 Amazon Simple Storage Service User Guide upfront, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. • Amazon EMR – Helps businesses, researchers, data analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. • AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where"} +{"global_id": 855, "doc_id": "s3", "chunk_id": "18", "question_id": 4, "question": "What does Amazon EMR help businesses do?", "answer_span": "Amazon EMR – Helps businesses, researchers, data analysts, and developers easily and costeffectively process vast amounts of data.", "chunk": "color = ruby or color = garnet. However, because W1 and W2 finish before the start of R2, R2 returns color = garnet. Concurrent applications API Version 2006-03-01 13 Amazon Simple Storage Service User Guide In the last example, W2 begins before W1 has received an acknowledgment. Therefore, these writes are considered concurrent. Amazon S3 internally uses last-writer-wins semantics to determine which write takes precedence. However, the order in which Amazon S3 receives the requests and the order in which applications receive acknowledgments cannot be predicted because of various factors, such as network latency. For example, W2 might be initiated by an Amazon EC2 instance in the same Region, while W1 might be initiated by a host that is farther away. The best way to determine the final value is to perform a read after both writes have been acknowledged. Related services After you load your data into Amazon S3, you can use it with other AWS services. The following are the services that you might use most frequently: • Amazon Elastic Compute Cloud (Amazon EC2) – Provides secure and scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware Related services API Version 2006-03-01 14 Amazon Simple Storage Service User Guide upfront, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. • Amazon EMR – Helps businesses, researchers, data analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. 
• AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where"} +{"global_id": 856, "doc_id": "s3", "chunk_id": "19", "question_id": 1, "question": "What framework does Amazon EMR use?", "answer_span": "Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3.", "chunk": "analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. • AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where there's a lack of consistent network connectivity. You can use AWS Snow Family devices to locally and cost-effectively access the storage and compute power of the AWS Cloud in places where an internet connection might not be an option. • AWS Transfer Family – Provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon Elastic File System (Amazon EFS) using Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP). Accessing Amazon S3 You can work with Amazon S3 in any of the following ways: AWS Management Console The console is a web-based user interface for managing Amazon S3 and AWS resources. If you've signed up for an AWS account, you can access the Amazon S3 console by signing into the AWS Management Console and choosing S3 from the AWS Management Console home page. AWS Command Line Interface You can use the AWS command line tools to issue commands or build scripts at your system's command line to perform AWS (including S3) tasks. The AWS Command Line Interface (AWS CLI) provides commands for a broad set of AWS services. The AWS CLI is supported on Windows, macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various"} +{"global_id": 857, "doc_id": "s3", "chunk_id": "19", "question_id": 2, "question": "What does the AWS Snow Family help customers with?", "answer_span": "Helps customers that need to run operations in austere, non-data center environments, and in locations where there's a lack of consistent network connectivity.", "chunk": "analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. • AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where there's a lack of consistent network connectivity. You can use AWS Snow Family devices to locally and cost-effectively access the storage and compute power of the AWS Cloud in places where an internet connection might not be an option. • AWS Transfer Family – Provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon Elastic File System (Amazon EFS) using Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP). Accessing Amazon S3 You can work with Amazon S3 in any of the following ways: AWS Management Console The console is a web-based user interface for managing Amazon S3 and AWS resources. 
If you've signed up for an AWS account, you can access the Amazon S3 console by signing into the AWS Management Console and choosing S3 from the AWS Management Console home page. AWS Command Line Interface You can use the AWS command line tools to issue commands or build scripts at your system's command line to perform AWS (including S3) tasks. The AWS Command Line Interface (AWS CLI) provides commands for a broad set of AWS services. The AWS CLI is supported on Windows, macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various"} +{"global_id": 858, "doc_id": "s3", "chunk_id": "19", "question_id": 3, "question": "What protocols does the AWS Transfer Family support for file transfers?", "answer_span": "using Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP).", "chunk": "analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. • AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where there's a lack of consistent network connectivity. You can use AWS Snow Family devices to locally and cost-effectively access the storage and compute power of the AWS Cloud in places where an internet connection might not be an option. • AWS Transfer Family – Provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon Elastic File System (Amazon EFS) using Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP). Accessing Amazon S3 You can work with Amazon S3 in any of the following ways: AWS Management Console The console is a web-based user interface for managing Amazon S3 and AWS resources. If you've signed up for an AWS account, you can access the Amazon S3 console by signing into the AWS Management Console and choosing S3 from the AWS Management Console home page. AWS Command Line Interface You can use the AWS command line tools to issue commands or build scripts at your system's command line to perform AWS (including S3) tasks. The AWS Command Line Interface (AWS CLI) provides commands for a broad set of AWS services. The AWS CLI is supported on Windows, macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various"} +{"global_id": 859, "doc_id": "s3", "chunk_id": "19", "question_id": 4, "question": "How can you access the Amazon S3 console?", "answer_span": "you can access the Amazon S3 console by signing into the AWS Management Console and choosing S3 from the AWS Management Console home page.", "chunk": "analysts, and developers easily and costeffectively process vast amounts of data. Amazon EMR uses a hosted Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. 
• AWS Snow Family – Helps customers that need to run operations in austere, non-data center environments, and in locations where there's a lack of consistent network connectivity. You can use AWS Snow Family devices to locally and cost-effectively access the storage and compute power of the AWS Cloud in places where an internet connection might not be an option. • AWS Transfer Family – Provides fully managed support for file transfers directly into and out of Amazon S3 or Amazon Elastic File System (Amazon EFS) using Secure Shell (SSH) File Transfer Protocol (SFTP), File Transfer Protocol over SSL (FTPS), and File Transfer Protocol (FTP). Accessing Amazon S3 You can work with Amazon S3 in any of the following ways: AWS Management Console The console is a web-based user interface for managing Amazon S3 and AWS resources. If you've signed up for an AWS account, you can access the Amazon S3 console by signing into the AWS Management Console and choosing S3 from the AWS Management Console home page. AWS Command Line Interface You can use the AWS command line tools to issue commands or build scripts at your system's command line to perform AWS (including S3) tasks. The AWS Command Line Interface (AWS CLI) provides commands for a broad set of AWS services. The AWS CLI is supported on Windows, macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various"} +{"global_id": 860, "doc_id": "s3", "chunk_id": "20", "question_id": 1, "question": "What is a bucket in Amazon S3?", "answer_span": "A bucket is a container for objects.", "chunk": "macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, and so on). Accessing Amazon S3 API Version 2006-03-01 15 Amazon Simple Storage Service User Guide Creating, configuring, and working with Amazon S3 general purpose buckets To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a container for objects. An object is a file and any metadata that describes that file. To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the object is in the bucket, you can open it, download it, and move it. When you no longer need an object or a bucket, you can clean up your resources. The topics in this section provide an overview of working with general purpose buckets in Amazon S3. They include information about naming, creating, accessing, and deleting general purpose buckets. For more information about viewing or listing objects in a bucket, see Organizing, listing, and working with your objects. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. 
Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for"} +{"global_id": 861, "doc_id": "s3", "chunk_id": "20", "question_id": 2, "question": "What do you need to do to store an object in Amazon S3?", "answer_span": "To store an object in Amazon S3, you create a bucket and then upload the object to a bucket.", "chunk": "macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, and so on). Accessing Amazon S3 API Version 2006-03-01 15 Amazon Simple Storage Service User Guide Creating, configuring, and working with Amazon S3 general purpose buckets To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a container for objects. An object is a file and any metadata that describes that file. To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the object is in the bucket, you can open it, download it, and move it. When you no longer need an object or a bucket, you can clean up your resources. The topics in this section provide an overview of working with general purpose buckets in Amazon S3. They include information about naming, creating, accessing, and deleting general purpose buckets. For more information about viewing or listing objects in a bucket, see Organizing, listing, and working with your objects. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for"} +{"global_id": 862, "doc_id": "s3", "chunk_id": "20", "question_id": 3, "question": "What topics are covered in the section about working with general purpose buckets in Amazon S3?", "answer_span": "The topics in this section provide an overview of working with general purpose buckets in Amazon S3.", "chunk": "macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, and so on). Accessing Amazon S3 API Version 2006-03-01 15 Amazon Simple Storage Service User Guide Creating, configuring, and working with Amazon S3 general purpose buckets To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a container for objects. An object is a file and any metadata that describes that file. To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the object is in the bucket, you can open it, download it, and move it. 
When you no longer need an object or a bucket, you can clean up your resources. The topics in this section provide an overview of working with general purpose buckets in Amazon S3. They include information about naming, creating, accessing, and deleting general purpose buckets. For more information about viewing or listing objects in a bucket, see Organizing, listing, and working with your objects. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for"} +{"global_id": 863, "doc_id": "s3", "chunk_id": "20", "question_id": 4, "question": "What should you do before creating a bucket?", "answer_span": "Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements.", "chunk": "macOS, and Linux. To get started, see the AWS Command Line Interface User Guide. For more information about the commands for Amazon S3, see s3api and s3control in the AWS CLI Command Reference. AWS SDKs AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, and so on). Accessing Amazon S3 API Version 2006-03-01 15 Amazon Simple Storage Service User Guide Creating, configuring, and working with Amazon S3 general purpose buckets To store your data in Amazon S3, you work with resources known as buckets and objects. A bucket is a container for objects. An object is a file and any metadata that describes that file. To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the object is in the bucket, you can open it, download it, and move it. When you no longer need an object or a bucket, you can clean up your resources. The topics in this section provide an overview of working with general purpose buckets in Amazon S3. They include information about naming, creating, accessing, and deleting general purpose buckets. For more information about viewing or listing objects in a bucket, see Organizing, listing, and working with your objects. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for"} +{"global_id": 864, "doc_id": "s3", "chunk_id": "21", "question_id": 1, "question": "What must you do first to upload your data to Amazon S3?", "answer_span": "you must first create an S3 bucket in one of the AWS Regions.", "chunk": "information about the various bucket types and the appropriate use cases for each, see Buckets. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for what you use. 
For more information about Amazon S3 features and pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for free. For more information, see AWS Free Tier. Topics • General purpose buckets overview • Common general purpose bucket patterns for building applications on Amazon S3 • General purpose bucket naming rules API Version 2006-03-01 52 Amazon Simple Storage Service User Guide • General purpose bucket quotas, limitations, and restrictions • Accessing an Amazon S3 general purpose bucket • Creating a general purpose bucket • Viewing the properties for an S3 general purpose bucket • Listing Amazon S3 general purpose buckets • Emptying a general purpose bucket • Deleting a general purpose bucket • Mount an Amazon S3 bucket as a local file system • Working with Storage Browser for Amazon S3 • Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration • Using Requester Pays general purpose buckets for storage transfers and usage General purpose buckets overview To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket in one of the AWS Regions. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including"} +{"global_id": 865, "doc_id": "s3", "chunk_id": "21", "question_id": 2, "question": "What should you ensure before creating a bucket?", "answer_span": "make sure that you choose the bucket type that best fits your application and performance requirements.", "chunk": "information about the various bucket types and the appropriate use cases for each, see Buckets. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for free. For more information, see AWS Free Tier. Topics • General purpose buckets overview • Common general purpose bucket patterns for building applications on Amazon S3 • General purpose bucket naming rules API Version 2006-03-01 52 Amazon Simple Storage Service User Guide • General purpose bucket quotas, limitations, and restrictions • Accessing an Amazon S3 general purpose bucket • Creating a general purpose bucket • Viewing the properties for an S3 general purpose bucket • Listing Amazon S3 general purpose buckets • Emptying a general purpose bucket • Deleting a general purpose bucket • Mount an Amazon S3 bucket as a local file system • Working with Storage Browser for Amazon S3 • Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration • Using Requester Pays general purpose buckets for storage transfers and usage General purpose buckets overview To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket in one of the AWS Regions. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. 
For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including"} +{"global_id": 866, "doc_id": "s3", "chunk_id": "21", "question_id": 3, "question": "Where can you find more information about the various bucket types?", "answer_span": "For more information about the various bucket types and the appropriate use cases for each, see Buckets.", "chunk": "information about the various bucket types and the appropriate use cases for each, see Buckets. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for free. For more information, see AWS Free Tier. Topics • General purpose buckets overview • Common general purpose bucket patterns for building applications on Amazon S3 • General purpose bucket naming rules API Version 2006-03-01 52 Amazon Simple Storage Service User Guide • General purpose bucket quotas, limitations, and restrictions • Accessing an Amazon S3 general purpose bucket • Creating a general purpose bucket • Viewing the properties for an S3 general purpose bucket • Listing Amazon S3 general purpose buckets • Emptying a general purpose bucket • Deleting a general purpose bucket • Mount an Amazon S3 bucket as a local file system • Working with Storage Browser for Amazon S3 • Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration • Using Requester Pays general purpose buckets for storage transfers and usage General purpose buckets overview To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket in one of the AWS Regions. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including"} +{"global_id": 867, "doc_id": "s3", "chunk_id": "21", "question_id": 4, "question": "What is the API version mentioned in the document?", "answer_span": "API Version 2006-03-01", "chunk": "information about the various bucket types and the appropriate use cases for each, see Buckets. Note For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see S3 Express One Zone and Working with directory buckets. Note With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and pricing, see Amazon S3. If you are a new Amazon S3 customer, you can get started with Amazon S3 for free. For more information, see AWS Free Tier. 
Topics • General purpose buckets overview • Common general purpose bucket patterns for building applications on Amazon S3 • General purpose bucket naming rules API Version 2006-03-01 52 Amazon Simple Storage Service User Guide • General purpose bucket quotas, limitations, and restrictions • Accessing an Amazon S3 general purpose bucket • Creating a general purpose bucket • Viewing the properties for an S3 general purpose bucket • Listing Amazon S3 general purpose buckets • Emptying a general purpose bucket • Deleting a general purpose bucket • Mount an Amazon S3 bucket as a local file system • Working with Storage Browser for Amazon S3 • Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration • Using Requester Pays general purpose buckets for storage transfers and usage General purpose buckets overview To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket in one of the AWS Regions. There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including"} +{"global_id": 868, "doc_id": "s3", "chunk_id": "22", "question_id": 1, "question": "What should you do before creating a bucket?", "answer_span": "Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements.", "chunk": "S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including bucket naming rules, quotas, and bucket configuration details. For a list of restriction and limitations related to Amazon S3 buckets see, General purpose bucket quotas, limitations, and restrictions. Topics • General purpose buckets overview • Common general purpose bucket patterns • Permissions • Managing public access to general purpose buckets • General purpose buckets configuration options • General purpose buckets operations General purpose buckets overview API Version 2006-03-01 53 Amazon Simple Storage Service User Guide • General purpose buckets performance monitoring General purpose buckets overview Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. • General purpose bucket quotas for commercial Regions can only be viewed and managed from US East (N. Virginia). • General purpose bucket quotas for AWS GovCloud (US) can only be viewed and managed from AWS GovCloud (US-West). In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. 
For information about working with objects, see Amazon S3 objects overview. Amazon S3"} +{"global_id": 869, "doc_id": "s3", "chunk_id": "22", "question_id": 2, "question": "Where can you find more information about general purpose buckets?", "answer_span": "The following sections provide more information about general purpose buckets, including bucket naming rules, quotas, and bucket configuration details.", "chunk": "S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including bucket naming rules, quotas, and bucket configuration details. For a list of restriction and limitations related to Amazon S3 buckets see, General purpose bucket quotas, limitations, and restrictions. Topics • General purpose buckets overview • Common general purpose bucket patterns • Permissions • Managing public access to general purpose buckets • General purpose buckets configuration options • General purpose buckets operations General purpose buckets overview API Version 2006-03-01 53 Amazon Simple Storage Service User Guide • General purpose buckets performance monitoring General purpose buckets overview Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. • General purpose bucket quotas for commercial Regions can only be viewed and managed from US East (N. Virginia). • General purpose bucket quotas for AWS GovCloud (US) can only be viewed and managed from AWS GovCloud (US-West). In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. For information about working with objects, see Amazon S3 objects overview. Amazon S3"} +{"global_id": 870, "doc_id": "s3", "chunk_id": "22", "question_id": 3, "question": "How can you manage general purpose bucket quotas for commercial Regions?", "answer_span": "General purpose bucket quotas for commercial Regions can only be viewed and managed from US East (N. Virginia).", "chunk": "S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including bucket naming rules, quotas, and bucket configuration details. For a list of restriction and limitations related to Amazon S3 buckets see, General purpose bucket quotas, limitations, and restrictions. 
Topics • General purpose buckets overview • Common general purpose bucket patterns • Permissions • Managing public access to general purpose buckets • General purpose buckets configuration options • General purpose buckets operations General purpose buckets overview API Version 2006-03-01 53 Amazon Simple Storage Service User Guide • General purpose buckets performance monitoring General purpose buckets overview Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. • General purpose bucket quotas for commercial Regions can only be viewed and managed from US East (N. Virginia). • General purpose bucket quotas for AWS GovCloud (US) can only be viewed and managed from AWS GovCloud (US-West). In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. For information about working with objects, see Amazon S3 objects overview. Amazon S3"} +{"global_id": 871, "doc_id": "s3", "chunk_id": "22", "question_id": 4, "question": "What does Amazon S3 provide for managing buckets and objects?", "answer_span": "In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for you to manage them.", "chunk": "S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see Buckets. The following sections provide more information about general purpose buckets, including bucket naming rules, quotas, and bucket configuration details. For a list of restriction and limitations related to Amazon S3 buckets see, General purpose bucket quotas, limitations, and restrictions. Topics • General purpose buckets overview • Common general purpose bucket patterns • Permissions • Managing public access to general purpose buckets • General purpose buckets configuration options • General purpose buckets operations General purpose buckets overview API Version 2006-03-01 53 Amazon Simple Storage Service User Guide • General purpose buckets performance monitoring General purpose buckets overview Every object is contained in a bucket. For example, if the object named photos/puppy.jpg is stored in the amzn-s3-demo-bucket general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL https://amzn-s3-demo-bucket.s3.uswest-2.amazonaws.com/photos/puppy.jpg. For more information, see Accessing a Bucket. • General purpose bucket quotas for commercial Regions can only be viewed and managed from US East (N. Virginia). • General purpose bucket quotas for AWS GovCloud (US) can only be viewed and managed from AWS GovCloud (US-West). In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You can also use the Amazon S3 console to perform these operations. 
The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. For information about working with objects, see Amazon S3 objects overview. Amazon S3"} +{"global_id": 872, "doc_id": "s3", "chunk_id": "23", "question_id": 1, "question": "What does the Amazon S3 console use to send requests to Amazon S3?", "answer_span": "The console uses the Amazon S3 APIs to send requests to Amazon S3.", "chunk": "S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. For information about working with objects, see Amazon S3 objects overview. Amazon S3 supports global general purpose buckets, which means that each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US)). After a general purpose bucket is created, the name of that bucket cannot be used by another AWS account in the same partition until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes. For bucket naming guidelines, see General purpose bucket naming rules. Amazon S3 creates buckets in a Region that you specify. To reduce latency, minimize costs, or address regulatory requirements, choose any AWS Region that is geographically close to you. For example, if you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe (Frankfurt) Regions. For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General Reference. General purpose buckets overview API Version 2006-03-01 54"} +{"global_id": 873, "doc_id": "s3", "chunk_id": "23", "question_id": 2, "question": "What must be unique across all AWS accounts in all the AWS Regions within a partition?", "answer_span": "each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition.", "chunk": "S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. For information about working with objects, see Amazon S3 objects overview. Amazon S3 supports global general purpose buckets, which means that each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US)). After a general purpose bucket is created, the name of that bucket cannot be used by another AWS account in the same partition until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes. For bucket naming guidelines, see General purpose bucket naming rules. Amazon S3 creates buckets in a Region that you specify. To reduce latency, minimize costs, or address regulatory requirements, choose any AWS Region that is geographically close to you. For example, if you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe (Frankfurt) Regions. 
For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General Reference. General purpose buckets overview API Version 2006-03-01 54"} +{"global_id": 874, "doc_id": "s3", "chunk_id": "23", "question_id": 3, "question": "What should you not depend on for availability or security verification purposes?", "answer_span": "You should not depend on specific bucket naming conventions for availability or security verification purposes.", "chunk": "S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. For information about working with objects, see Amazon S3 objects overview. Amazon S3 supports global general purpose buckets, which means that each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US)). After a general purpose bucket is created, the name of that bucket cannot be used by another AWS account in the same partition until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes. For bucket naming guidelines, see General purpose bucket naming rules. Amazon S3 creates buckets in a Region that you specify. To reduce latency, minimize costs, or address regulatory requirements, choose any AWS Region that is geographically close to you. For example, if you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe (Frankfurt) Regions. For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General Reference. General purpose buckets overview API Version 2006-03-01 54"} +{"global_id": 875, "doc_id": "s3", "chunk_id": "23", "question_id": 4, "question": "What is the API Version mentioned in the text?", "answer_span": "API Version 2006-03-01", "chunk": "S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. This section describes how to work with general purpose buckets. For information about working with objects, see Amazon S3 objects overview. Amazon S3 supports global general purpose buckets, which means that each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (Standard Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US)). After a general purpose bucket is created, the name of that bucket cannot be used by another AWS account in the same partition until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes. For bucket naming guidelines, see General purpose bucket naming rules. Amazon S3 creates buckets in a Region that you specify. To reduce latency, minimize costs, or address regulatory requirements, choose any AWS Region that is geographically close to you. For example, if you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe (Frankfurt) Regions. For a list of Amazon S3 Regions, see Regions and Endpoints in the AWS General Reference. 
General purpose buckets overview API Version 2006-03-01 54"} +{"global_id": 876, "doc_id": "aurora", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Aurora DSQL?", "answer_span": "Amazon Aurora DSQL is a serverless, distributed relational database service optimized for transactional workloads.", "chunk": "Amazon Aurora DSQL User Guide What is Amazon Aurora DSQL? Amazon Aurora DSQL is a serverless, distributed relational database service optimized for transactional workloads. Aurora DSQL offers virtually unlimited scale and doesn't require you to manage infrastructure. The active-active highly available architecture provides 99.99% singleRegion and 99.999% multi-Region availability. When to use Aurora DSQL Aurora DSQL is optimized for transactional workloads that benefit from ACID transactions and a relational data model. Because it's serverless, Aurora DSQL is ideal for application patterns of microservice, serverless, and event-driven architectures. Aurora DSQL is PostgreSQL-compatible, so you can use familiar drivers, object-relational mappings (ORMs), frameworks, and SQL features. Aurora DSQL automatically manages system infrastructure and scales compute, I/O, and storage based on your workload. Because you have no servers to provision or manage, you don't have to worry about maintenance downtime related to provisioning, patching, or infrastructure upgrades. Aurora DSQL helps you to build and maintain enterprise applications that are always available at any scale. The active-active serverless design automates failure recovery, so you don't need to worry about traditional database failover. Your applications benefit from Multi-AZ and multiRegion availability, and you don't have to be concerned about eventual consistency or missing data related to failovers. Key features in Aurora DSQL The following key features help you create a serverless distributed database to support your highavailability applications: Distributed architecture Aurora DSQL is composed of the following multi-tenant components: • Relay and connectivity • Compute and databases • Transaction log, concurrency control, and isolation • Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in"} +{"global_id": 877, "doc_id": "aurora", "chunk_id": "0", "question_id": 2, "question": "What availability does Aurora DSQL provide?", "answer_span": "The active-active highly available architecture provides 99.99% singleRegion and 99.999% multi-Region availability.", "chunk": "Amazon Aurora DSQL User Guide What is Amazon Aurora DSQL? Amazon Aurora DSQL is a serverless, distributed relational database service optimized for transactional workloads. Aurora DSQL offers virtually unlimited scale and doesn't require you to manage infrastructure. The active-active highly available architecture provides 99.99% singleRegion and 99.999% multi-Region availability. When to use Aurora DSQL Aurora DSQL is optimized for transactional workloads that benefit from ACID transactions and a relational data model. Because it's serverless, Aurora DSQL is ideal for application patterns of microservice, serverless, and event-driven architectures. 
Aurora DSQL is PostgreSQL-compatible, so you can use familiar drivers, object-relational mappings (ORMs), frameworks, and SQL features. Aurora DSQL automatically manages system infrastructure and scales compute, I/O, and storage based on your workload. Because you have no servers to provision or manage, you don't have to worry about maintenance downtime related to provisioning, patching, or infrastructure upgrades. Aurora DSQL helps you to build and maintain enterprise applications that are always available at any scale. The active-active serverless design automates failure recovery, so you don't need to worry about traditional database failover. Your applications benefit from Multi-AZ and multiRegion availability, and you don't have to be concerned about eventual consistency or missing data related to failovers. Key features in Aurora DSQL The following key features help you create a serverless distributed database to support your highavailability applications: Distributed architecture Aurora DSQL is composed of the following multi-tenant components: • Relay and connectivity • Compute and databases • Transaction log, concurrency control, and isolation • Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in"} +{"global_id": 878, "doc_id": "aurora", "chunk_id": "0", "question_id": 3, "question": "What is Aurora DSQL optimized for?", "answer_span": "Aurora DSQL is optimized for transactional workloads that benefit from ACID transactions and a relational data model.", "chunk": "Amazon Aurora DSQL User Guide What is Amazon Aurora DSQL? Amazon Aurora DSQL is a serverless, distributed relational database service optimized for transactional workloads. Aurora DSQL offers virtually unlimited scale and doesn't require you to manage infrastructure. The active-active highly available architecture provides 99.99% singleRegion and 99.999% multi-Region availability. When to use Aurora DSQL Aurora DSQL is optimized for transactional workloads that benefit from ACID transactions and a relational data model. Because it's serverless, Aurora DSQL is ideal for application patterns of microservice, serverless, and event-driven architectures. Aurora DSQL is PostgreSQL-compatible, so you can use familiar drivers, object-relational mappings (ORMs), frameworks, and SQL features. Aurora DSQL automatically manages system infrastructure and scales compute, I/O, and storage based on your workload. Because you have no servers to provision or manage, you don't have to worry about maintenance downtime related to provisioning, patching, or infrastructure upgrades. Aurora DSQL helps you to build and maintain enterprise applications that are always available at any scale. The active-active serverless design automates failure recovery, so you don't need to worry about traditional database failover. Your applications benefit from Multi-AZ and multiRegion availability, and you don't have to be concerned about eventual consistency or missing data related to failovers. 
Key features in Aurora DSQL The following key features help you create a serverless distributed database to support your highavailability applications: Distributed architecture Aurora DSQL is composed of the following multi-tenant components: • Relay and connectivity • Compute and databases • Transaction log, concurrency control, and isolation • Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in"} +{"global_id": 879, "doc_id": "aurora", "chunk_id": "0", "question_id": 4, "question": "What does Aurora DSQL automatically manage?", "answer_span": "Aurora DSQL automatically manages system infrastructure and scales compute, I/O, and storage based on your workload.", "chunk": "Amazon Aurora DSQL User Guide What is Amazon Aurora DSQL? Amazon Aurora DSQL is a serverless, distributed relational database service optimized for transactional workloads. Aurora DSQL offers virtually unlimited scale and doesn't require you to manage infrastructure. The active-active highly available architecture provides 99.99% singleRegion and 99.999% multi-Region availability. When to use Aurora DSQL Aurora DSQL is optimized for transactional workloads that benefit from ACID transactions and a relational data model. Because it's serverless, Aurora DSQL is ideal for application patterns of microservice, serverless, and event-driven architectures. Aurora DSQL is PostgreSQL-compatible, so you can use familiar drivers, object-relational mappings (ORMs), frameworks, and SQL features. Aurora DSQL automatically manages system infrastructure and scales compute, I/O, and storage based on your workload. Because you have no servers to provision or manage, you don't have to worry about maintenance downtime related to provisioning, patching, or infrastructure upgrades. Aurora DSQL helps you to build and maintain enterprise applications that are always available at any scale. The active-active serverless design automates failure recovery, so you don't need to worry about traditional database failover. Your applications benefit from Multi-AZ and multiRegion availability, and you don't have to be concerned about eventual consistency or missing data related to failovers. Key features in Aurora DSQL The following key features help you create a serverless distributed database to support your highavailability applications: Distributed architecture Aurora DSQL is composed of the following multi-tenant components: • Relay and connectivity • Compute and databases • Transaction log, concurrency control, and isolation • Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in"} +{"global_id": 880, "doc_id": "aurora", "chunk_id": "1", "question_id": 1, "question": "What does a control plane coordinate?", "answer_span": "A control plane coordinates the preceding components.", "chunk": "Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. 
Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in Amazon Aurora DSQL. Single-Region and multi-Region clusters Aurora DSQL clusters provide the following benefits: • Synchronous data replication • Consistent read operations • Automatic failure recovery • Data consistency across multiple AZs or Regions If an infrastructure component fails, Aurora DSQL automatically routes requests to healthy infrastructure without manual intervention. Aurora DSQL provides atomicity, consistency, isolation, and durability (ACID) transactions with strong consistency, snapshot isolation, atomicity, and cross-AZ and cross-Region durability. Multi-Region peered clusters provide the same resilience and connectivity as single-Region clusters. But they improve availability by offering two Regional endpoints, one in each peered cluster Region. Both endpoints of a peered cluster present a single logical database. They are available for concurrent read and write operations, and provide strong data consistency. You can build applications that run in multiple Regions at the same time for performance and resilience—and know that readers always see the same data. Compatibility with PostgreSQL databases The distributed database layer (compute) in Aurora DSQL is based on a current major version of PostgreSQL. You can connect to Aurora DSQL with familiar PostgreSQL drivers and tools, such as psql. Aurora DSQL is currently compatible with PostgreSQL version 16 and supports a subset of PostgreSQL features, expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. Region availability determines where"} +{"global_id": 881, "doc_id": "aurora", "chunk_id": "1", "question_id": 2, "question": "What are the benefits of Aurora DSQL clusters?", "answer_span": "Aurora DSQL clusters provide the following benefits: • Synchronous data replication • Consistent read operations • Automatic failure recovery • Data consistency across multiple AZs or Regions", "chunk": "Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in Amazon Aurora DSQL. Single-Region and multi-Region clusters Aurora DSQL clusters provide the following benefits: • Synchronous data replication • Consistent read operations • Automatic failure recovery • Data consistency across multiple AZs or Regions If an infrastructure component fails, Aurora DSQL automatically routes requests to healthy infrastructure without manual intervention. Aurora DSQL provides atomicity, consistency, isolation, and durability (ACID) transactions with strong consistency, snapshot isolation, atomicity, and cross-AZ and cross-Region durability. Multi-Region peered clusters provide the same resilience and connectivity as single-Region clusters. But they improve availability by offering two Regional endpoints, one in each peered cluster Region. 
Both endpoints of a peered cluster present a single logical database. They are available for concurrent read and write operations, and provide strong data consistency. You can build applications that run in multiple Regions at the same time for performance and resilience—and know that readers always see the same data. Compatibility with PostgreSQL databases The distributed database layer (compute) in Aurora DSQL is based on a current major version of PostgreSQL. You can connect to Aurora DSQL with familiar PostgreSQL drivers and tools, such as psql. Aurora DSQL is currently compatible with PostgreSQL version 16 and supports a subset of PostgreSQL features, expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. Region availability determines where"} +{"global_id": 882, "doc_id": "aurora", "chunk_id": "1", "question_id": 3, "question": "What version of PostgreSQL is Aurora DSQL currently compatible with?", "answer_span": "Aurora DSQL is currently compatible with PostgreSQL version 16 and supports a subset of PostgreSQL features, expressions, and data types.", "chunk": "Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in Amazon Aurora DSQL. Single-Region and multi-Region clusters Aurora DSQL clusters provide the following benefits: • Synchronous data replication • Consistent read operations • Automatic failure recovery • Data consistency across multiple AZs or Regions If an infrastructure component fails, Aurora DSQL automatically routes requests to healthy infrastructure without manual intervention. Aurora DSQL provides atomicity, consistency, isolation, and durability (ACID) transactions with strong consistency, snapshot isolation, atomicity, and cross-AZ and cross-Region durability. Multi-Region peered clusters provide the same resilience and connectivity as single-Region clusters. But they improve availability by offering two Regional endpoints, one in each peered cluster Region. Both endpoints of a peered cluster present a single logical database. They are available for concurrent read and write operations, and provide strong data consistency. You can build applications that run in multiple Regions at the same time for performance and resilience—and know that readers always see the same data. Compatibility with PostgreSQL databases The distributed database layer (compute) in Aurora DSQL is based on a current major version of PostgreSQL. You can connect to Aurora DSQL with familiar PostgreSQL drivers and tools, such as psql. Aurora DSQL is currently compatible with PostgreSQL version 16 and supports a subset of PostgreSQL features, expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. 
Region availability determines where"} +{"global_id": 883, "doc_id": "aurora", "chunk_id": "1", "question_id": 4, "question": "What does Region availability determine for Aurora DSQL?", "answer_span": "Region availability determines where", "chunk": "Storage When to use 1 Amazon Aurora DSQL User Guide A control plane coordinates the preceding components. Each component provide redundancy across three Availability Zones (AZs), with automatic cluster scaling and self-healing in case of component failures. To learn more about how this architecture supports high availability, see Resilience in Amazon Aurora DSQL. Single-Region and multi-Region clusters Aurora DSQL clusters provide the following benefits: • Synchronous data replication • Consistent read operations • Automatic failure recovery • Data consistency across multiple AZs or Regions If an infrastructure component fails, Aurora DSQL automatically routes requests to healthy infrastructure without manual intervention. Aurora DSQL provides atomicity, consistency, isolation, and durability (ACID) transactions with strong consistency, snapshot isolation, atomicity, and cross-AZ and cross-Region durability. Multi-Region peered clusters provide the same resilience and connectivity as single-Region clusters. But they improve availability by offering two Regional endpoints, one in each peered cluster Region. Both endpoints of a peered cluster present a single logical database. They are available for concurrent read and write operations, and provide strong data consistency. You can build applications that run in multiple Regions at the same time for performance and resilience—and know that readers always see the same data. Compatibility with PostgreSQL databases The distributed database layer (compute) in Aurora DSQL is based on a current major version of PostgreSQL. You can connect to Aurora DSQL with familiar PostgreSQL drivers and tools, such as psql. Aurora DSQL is currently compatible with PostgreSQL version 16 and supports a subset of PostgreSQL features, expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. Region availability determines where"} +{"global_id": 884, "doc_id": "aurora", "chunk_id": "2", "question_id": 1, "question": "What is Aurora DSQL compatible with?", "answer_span": "Aurora DSQL is a PostgreSQL-compatible, distributed relational database designed for transactional workloads.", "chunk": "expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. Region availability determines where you can create and manage Aurora DSQL database clusters. Database administrators and AWS Region availability 2 Amazon Aurora DSQL User Guide Aurora DSQL and PostgreSQL Aurora DSQL is a PostgreSQL-compatible, distributed relational database designed for transactional workloads. Aurora DSQL uses core PostgreSQL components such as the parser, planner, optimizer, and type system. The Aurora DSQL design ensures that all supported PostgreSQL syntax provides compatible behavior and yields identical query results. 
For example, Aurora DSQL provides type conversions, arithmetic operations, and numerical precision and scale that are identical to PostgreSQL. Any deviations are documented. Aurora DSQL also introduces advanced capabilities such as optimistic concurrency control and distributed schema management. With these features, you can use the familiar tooling of PostgreSQL while benefiting from the performance and scalability required for modern, cloudnative, distributed applications. PostgreSQL compatibility highlights Aurora DSQL is currently based on PostgreSQL version 16. Key compatibilities include the following: Wire protocol Aurora DSQL uses the standard PostgreSQL v3 wire protocol. This enables integration with standard PostgreSQL clients, drivers, and tools. For example, Aurora DSQL is compatible with psql, pgjdbc, and psycopg. SQL compatibility Aurora DSQL supports a wide range of standard PostgreSQL expressions and functions commonly used in transactional workloads. Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction"} +{"global_id": 885, "doc_id": "aurora", "chunk_id": "2", "question_id": 2, "question": "What version of PostgreSQL is Aurora DSQL currently based on?", "answer_span": "Aurora DSQL is currently based on PostgreSQL version 16.", "chunk": "expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. Region availability determines where you can create and manage Aurora DSQL database clusters. Database administrators and AWS Region availability 2 Amazon Aurora DSQL User Guide Aurora DSQL and PostgreSQL Aurora DSQL is a PostgreSQL-compatible, distributed relational database designed for transactional workloads. Aurora DSQL uses core PostgreSQL components such as the parser, planner, optimizer, and type system. The Aurora DSQL design ensures that all supported PostgreSQL syntax provides compatible behavior and yields identical query results. For example, Aurora DSQL provides type conversions, arithmetic operations, and numerical precision and scale that are identical to PostgreSQL. Any deviations are documented. Aurora DSQL also introduces advanced capabilities such as optimistic concurrency control and distributed schema management. With these features, you can use the familiar tooling of PostgreSQL while benefiting from the performance and scalability required for modern, cloudnative, distributed applications. PostgreSQL compatibility highlights Aurora DSQL is currently based on PostgreSQL version 16. Key compatibilities include the following: Wire protocol Aurora DSQL uses the standard PostgreSQL v3 wire protocol. This enables integration with standard PostgreSQL clients, drivers, and tools. For example, Aurora DSQL is compatible with psql, pgjdbc, and psycopg. SQL compatibility Aurora DSQL supports a wide range of standard PostgreSQL expressions and functions commonly used in transactional workloads. 
Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction"} +{"global_id": 886, "doc_id": "aurora", "chunk_id": "2", "question_id": 3, "question": "What wire protocol does Aurora DSQL use?", "answer_span": "Aurora DSQL uses the standard PostgreSQL v3 wire protocol.", "chunk": "expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. Region availability determines where you can create and manage Aurora DSQL database clusters. Database administrators and AWS Region availability 2 Amazon Aurora DSQL User Guide Aurora DSQL and PostgreSQL Aurora DSQL is a PostgreSQL-compatible, distributed relational database designed for transactional workloads. Aurora DSQL uses core PostgreSQL components such as the parser, planner, optimizer, and type system. The Aurora DSQL design ensures that all supported PostgreSQL syntax provides compatible behavior and yields identical query results. For example, Aurora DSQL provides type conversions, arithmetic operations, and numerical precision and scale that are identical to PostgreSQL. Any deviations are documented. Aurora DSQL also introduces advanced capabilities such as optimistic concurrency control and distributed schema management. With these features, you can use the familiar tooling of PostgreSQL while benefiting from the performance and scalability required for modern, cloudnative, distributed applications. PostgreSQL compatibility highlights Aurora DSQL is currently based on PostgreSQL version 16. Key compatibilities include the following: Wire protocol Aurora DSQL uses the standard PostgreSQL v3 wire protocol. This enables integration with standard PostgreSQL clients, drivers, and tools. For example, Aurora DSQL is compatible with psql, pgjdbc, and psycopg. SQL compatibility Aurora DSQL supports a wide range of standard PostgreSQL expressions and functions commonly used in transactional workloads. Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction"} +{"global_id": 887, "doc_id": "aurora", "chunk_id": "2", "question_id": 4, "question": "What types of operations does Aurora DSQL support that yield identical results to PostgreSQL?", "answer_span": "Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations.", "chunk": "expressions, and data types. For more information about the supported SQL features, see SQL feature compatibility in Aurora DSQL. Region availability for Aurora DSQL With Amazon Aurora DSQL, you can deploy database instances across multiple AWS Regions to support global applications and meet data residency requirements. 
Region availability determines where you can create and manage Aurora DSQL database clusters. Database administrators and AWS Region availability 2 Amazon Aurora DSQL User Guide Aurora DSQL and PostgreSQL Aurora DSQL is a PostgreSQL-compatible, distributed relational database designed for transactional workloads. Aurora DSQL uses core PostgreSQL components such as the parser, planner, optimizer, and type system. The Aurora DSQL design ensures that all supported PostgreSQL syntax provides compatible behavior and yields identical query results. For example, Aurora DSQL provides type conversions, arithmetic operations, and numerical precision and scale that are identical to PostgreSQL. Any deviations are documented. Aurora DSQL also introduces advanced capabilities such as optimistic concurrency control and distributed schema management. With these features, you can use the familiar tooling of PostgreSQL while benefiting from the performance and scalability required for modern, cloudnative, distributed applications. PostgreSQL compatibility highlights Aurora DSQL is currently based on PostgreSQL version 16. Key compatibilities include the following: Wire protocol Aurora DSQL uses the standard PostgreSQL v3 wire protocol. This enables integration with standard PostgreSQL clients, drivers, and tools. For example, Aurora DSQL is compatible with psql, pgjdbc, and psycopg. SQL compatibility Aurora DSQL supports a wide range of standard PostgreSQL expressions and functions commonly used in transactional workloads. Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction"} +{"global_id": 888, "doc_id": "aurora", "chunk_id": "3", "question_id": 1, "question": "What SQL expressions yield identical results to PostgreSQL?", "answer_span": "Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations", "chunk": "Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction management Aurora DSQL preserves the primary characteristics of PostgreSQL, such as ACID transactions and an isolation level equivalent to PostgreSQL Repeatable Read. For more information, see Concurrency control in Aurora DSQL. Key architectural differences The distributed, shared-nothing design of Aurora DSQL results in a few foundational differences from traditional PostgreSQL. These differences are integral to the Aurora DSQL architecture and provide many performance and scalability benefits. Key differences include the following: Optimistic Concurrency Control (OCC) Aurora DSQL uses an optimistic concurrency control model. This lock-free approach prevents transactions from blocking one another, eliminates deadlocks, and enables high-throughput parallel execution. These features make Aurora DSQL particularly valuable for applications requiring consistent performance at scale. 
For more example, see Concurrency control in Aurora DSQL. Asynchronous DDL operations Aurora DSQL runs DDL operations asynchronously, which allows uninterrupted reads and writes during schema changes. Its distributed architecture allows Aurora DSQL to perform the following actions: • Run DDL operations as background tasks, minimizing disruption. • Coordinate catalog changes as strongly consistent distributed transactions. This ensures atomic visibility across all nodes, even during failures or concurrent operations. • Operate in a fully distributed, leaderless manner across multiple Availability Zones with decoupled compute and storage layers. For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. In the following sections, learn about Aurora DSQL support for"} +{"global_id": 889, "doc_id": "aurora", "chunk_id": "3", "question_id": 2, "question": "What transaction management characteristics does Aurora DSQL preserve?", "answer_span": "Aurora DSQL preserves the primary characteristics of PostgreSQL, such as ACID transactions and an isolation level equivalent to PostgreSQL Repeatable Read.", "chunk": "Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction management Aurora DSQL preserves the primary characteristics of PostgreSQL, such as ACID transactions and an isolation level equivalent to PostgreSQL Repeatable Read. For more information, see Concurrency control in Aurora DSQL. Key architectural differences The distributed, shared-nothing design of Aurora DSQL results in a few foundational differences from traditional PostgreSQL. These differences are integral to the Aurora DSQL architecture and provide many performance and scalability benefits. Key differences include the following: Optimistic Concurrency Control (OCC) Aurora DSQL uses an optimistic concurrency control model. This lock-free approach prevents transactions from blocking one another, eliminates deadlocks, and enables high-throughput parallel execution. These features make Aurora DSQL particularly valuable for applications requiring consistent performance at scale. For more example, see Concurrency control in Aurora DSQL. Asynchronous DDL operations Aurora DSQL runs DDL operations asynchronously, which allows uninterrupted reads and writes during schema changes. Its distributed architecture allows Aurora DSQL to perform the following actions: • Run DDL operations as background tasks, minimizing disruption. • Coordinate catalog changes as strongly consistent distributed transactions. This ensures atomic visibility across all nodes, even during failures or concurrent operations. • Operate in a fully distributed, leaderless manner across multiple Availability Zones with decoupled compute and storage layers. For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. 
In the following sections, learn about Aurora DSQL support for"} +{"global_id": 890, "doc_id": "aurora", "chunk_id": "3", "question_id": 3, "question": "What concurrency control model does Aurora DSQL use?", "answer_span": "Aurora DSQL uses an optimistic concurrency control model.", "chunk": "Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction management Aurora DSQL preserves the primary characteristics of PostgreSQL, such as ACID transactions and an isolation level equivalent to PostgreSQL Repeatable Read. For more information, see Concurrency control in Aurora DSQL. Key architectural differences The distributed, shared-nothing design of Aurora DSQL results in a few foundational differences from traditional PostgreSQL. These differences are integral to the Aurora DSQL architecture and provide many performance and scalability benefits. Key differences include the following: Optimistic Concurrency Control (OCC) Aurora DSQL uses an optimistic concurrency control model. This lock-free approach prevents transactions from blocking one another, eliminates deadlocks, and enables high-throughput parallel execution. These features make Aurora DSQL particularly valuable for applications requiring consistent performance at scale. For more example, see Concurrency control in Aurora DSQL. Asynchronous DDL operations Aurora DSQL runs DDL operations asynchronously, which allows uninterrupted reads and writes during schema changes. Its distributed architecture allows Aurora DSQL to perform the following actions: • Run DDL operations as background tasks, minimizing disruption. • Coordinate catalog changes as strongly consistent distributed transactions. This ensures atomic visibility across all nodes, even during failures or concurrent operations. • Operate in a fully distributed, leaderless manner across multiple Availability Zones with decoupled compute and storage layers. For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. In the following sections, learn about Aurora DSQL support for"} +{"global_id": 891, "doc_id": "aurora", "chunk_id": "3", "question_id": 4, "question": "How does Aurora DSQL run DDL operations?", "answer_span": "Aurora DSQL runs DDL operations asynchronously, which allows uninterrupted reads and writes during schema changes.", "chunk": "Supported SQL expressions yield identical results to PostgreSQL, including the following: • Handling of nulls • Sort order behavior • Scale and precision for numeric operations • Equivalence for string operations For more information, see SQL feature compatibility in Aurora DSQL. Compatibility highlights 39 Amazon Aurora DSQL User Guide Transaction management Aurora DSQL preserves the primary characteristics of PostgreSQL, such as ACID transactions and an isolation level equivalent to PostgreSQL Repeatable Read. For more information, see Concurrency control in Aurora DSQL. Key architectural differences The distributed, shared-nothing design of Aurora DSQL results in a few foundational differences from traditional PostgreSQL. 
These differences are integral to the Aurora DSQL architecture and provide many performance and scalability benefits. Key differences include the following: Optimistic Concurrency Control (OCC) Aurora DSQL uses an optimistic concurrency control model. This lock-free approach prevents transactions from blocking one another, eliminates deadlocks, and enables high-throughput parallel execution. These features make Aurora DSQL particularly valuable for applications requiring consistent performance at scale. For more example, see Concurrency control in Aurora DSQL. Asynchronous DDL operations Aurora DSQL runs DDL operations asynchronously, which allows uninterrupted reads and writes during schema changes. Its distributed architecture allows Aurora DSQL to perform the following actions: • Run DDL operations as background tasks, minimizing disruption. • Coordinate catalog changes as strongly consistent distributed transactions. This ensures atomic visibility across all nodes, even during failures or concurrent operations. • Operate in a fully distributed, leaderless manner across multiple Availability Zones with decoupled compute and storage layers. For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. In the following sections, learn about Aurora DSQL support for"} +{"global_id": 892, "doc_id": "aurora", "chunk_id": "4", "question_id": 1, "question": "What does Aurora DSQL and PostgreSQL return for all SQL queries?", "answer_span": "Aurora DSQL and PostgreSQL return identical results for all SQL queries.", "chunk": "For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. In the following sections, learn about Aurora DSQL support for PostgreSQL data types and SQL commands. Key architectural differences 40 Amazon Aurora DSQL User Guide Topics • Supported data types in Aurora DSQL • Supported SQL for Aurora DSQL • Supported subsets of SQL commands in Aurora DSQL • Unsupported PostgreSQL features in Aurora DSQL Supported data types in Aurora DSQL Aurora DSQL supports a subset of the common PostgreSQL types. Topics • Numeric data types • Character data types • Date and time data types • Miscellaneous data types • Query runtime data types Numeric data types Aurora DSQL supports the following PostgreSQL numeric data types. Name Aliases Range and precision Storage size Index support smallint int2 -32768 to +32767 2 bytes Yes integer int, -2147483648 to +21474836 47 4 bytes Yes int4 bigint int8 -9223372036854775808 to +9223372036854775807 8 bytes Yes real float4 6 decimal digits precision 4 bytes Yes double float8 15 decimal digits precision 8 bytes Yes precision Supported data types 41"} +{"global_id": 893, "doc_id": "aurora", "chunk_id": "4", "question_id": 2, "question": "What is a key architectural difference mentioned in the text?", "answer_span": "Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause.", "chunk": "For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. 
In the following sections, learn about Aurora DSQL support for PostgreSQL data types and SQL commands. Key architectural differences 40 Amazon Aurora DSQL User Guide Topics • Supported data types in Aurora DSQL • Supported SQL for Aurora DSQL • Supported subsets of SQL commands in Aurora DSQL • Unsupported PostgreSQL features in Aurora DSQL Supported data types in Aurora DSQL Aurora DSQL supports a subset of the common PostgreSQL types. Topics • Numeric data types • Character data types • Date and time data types • Miscellaneous data types • Query runtime data types Numeric data types Aurora DSQL supports the following PostgreSQL numeric data types. Name Aliases Range and precision Storage size Index support smallint int2 -32768 to +32767 2 bytes Yes integer int, -2147483648 to +2147483647 4 bytes Yes int4 bigint int8 -9223372036854775808 to +9223372036854775807 8 bytes Yes real float4 6 decimal digits precision 4 bytes Yes double float8 15 decimal digits precision 8 bytes Yes precision Supported data types 41"} +{"global_id": 894, "doc_id": "aurora", "chunk_id": "4", "question_id": 3, "question": "What types of data does Aurora DSQL support?", "answer_span": "Aurora DSQL supports a subset of the common PostgreSQL types.", "chunk": "For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. In the following sections, learn about Aurora DSQL support for PostgreSQL data types and SQL commands. Key architectural differences 40 Amazon Aurora DSQL User Guide Topics • Supported data types in Aurora DSQL • Supported SQL for Aurora DSQL • Supported subsets of SQL commands in Aurora DSQL • Unsupported PostgreSQL features in Aurora DSQL Supported data types in Aurora DSQL Aurora DSQL supports a subset of the common PostgreSQL types. 
Topics • Numeric data types • Character data types • Date and time data types • Miscellaneous data types • Query runtime data types Numeric data types Aurora DSQL supports the following PostgreSQL numeric data types. Name Aliases Range and precision Storage size Index support smallint int2 -32768 to +32767 2 bytes Yes integer int, -2147483648 to +2147483647 4 bytes Yes int4 bigint int8 -9223372036854775808 to +9223372036854775807 8 bytes Yes real float4 6 decimal digits precision 4 bytes Yes double float8 15 decimal digits precision 8 bytes Yes precision Supported data types 41"} +{"global_id": 895, "doc_id": "aurora", "chunk_id": "4", "question_id": 4, "question": "What is the storage size for the integer data type in Aurora DSQL?", "answer_span": "4 bytes", "chunk": "For more information, see DDL and distributed transactions in Aurora DSQL. SQL feature compatibility in Aurora DSQL Aurora DSQL and PostgreSQL return identical results for all SQL queries. Note that Aurora DSQL differs from PostgreSQL without an ORDER BY clause. In the following sections, learn about Aurora DSQL support for PostgreSQL data types and SQL commands. Key architectural differences 40 Amazon Aurora DSQL User Guide Topics • Supported data types in Aurora DSQL • Supported SQL for Aurora DSQL • Supported subsets of SQL commands in Aurora DSQL • Unsupported PostgreSQL features in Aurora DSQL Supported data types in Aurora DSQL Aurora DSQL supports a subset of the common PostgreSQL types. Topics • Numeric data types • Character data types • Date and time data types • Miscellaneous data types • Query runtime data types Numeric data types Aurora DSQL supports the following PostgreSQL numeric data types. Name Aliases Range and precision Storage size Index support smallint int2 -32768 to +32767 2 bytes Yes integer int, -2147483648 to +2147483647 4 bytes Yes int4 bigint int8 -9223372036854775808 to +9223372036854775807 8 bytes Yes real float4 6 decimal digits precision 4 bytes Yes double float8 15 decimal digits precision 8 bytes Yes precision Supported data types 41"} +{"global_id": 896, "doc_id": "batch", "chunk_id": "0", "question_id": 1, "question": "What does AWS Batch help you to run?", "answer_span": "AWS Batch helps you to run batch computing workloads on the AWS Cloud.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. 
This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides"} +{"global_id": 898, "doc_id": "batch", "chunk_id": "0", "question_id": 3, "question": "What does AWS Batch automatically provision?", "answer_span": "AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. 
AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides"} +{"global_id": 899, "doc_id": "batch", "chunk_id": "0", "question_id": 4, "question": "What capability does AWS Batch enable for SageMaker Training jobs?", "answer_span": "AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues.", "chunk": "AWS Batch User Guide What is AWS Batch? AWS Batch helps you to run batch computing workloads on the AWS Cloud. Batch computing is a common way for developers, scientists, and engineers to access large amounts of compute resources. AWS Batch removes the undifferentiated heavy lifting of configuring and managing the required infrastructure, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly. As a fully managed service, AWS Batch helps you to run batch computing workloads of any scale. AWS Batch automatically provisions compute resources and optimizes the workload distribution based on the quantity and scale of the workloads. With AWS Batch, there's no need to install or manage batch computing software, so you can focus your time on analyzing results and solving problems. 1 AWS Batch User Guide AWS Batch provides all of the necessary functionality to run high-scale, compute-intensive workloads on top of AWS managed container orchestration services, Amazon ECS and Amazon EKS. AWS Batch is able to scale compute capacity on Amazon EC2 instances and Fargate resources. AWS Batch provides a fully managed service for batch workloads, and delivers the operational capabilities to optimize these types of workloads for throughput, speed, resource efficiency, and cost. AWS Batch also enables SageMaker Training job queuing, allowing data scientists and ML engineers to submit Training jobs with priorities to configurable queues. This capability ensures that ML workloads run automatically as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides"} +{"global_id": 900, "doc_id": "batch", "chunk_id": "1", "question_id": 1, "question": "What does AWS Batch provide for machine learning workloads?", "answer_span": "For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. 
You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} +{"global_id": 901, "doc_id": "batch", "chunk_id": "1", "question_id": 2, "question": "What model does AWS Batch provide for administrators and data scientists?", "answer_span": "This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. 
For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} +{"global_id": 902, "doc_id": "batch", "chunk_id": "1", "question_id": 3, "question": "What is AWS Batch?", "answer_span": "AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} +{"global_id": 903, "doc_id": "batch", "chunk_id": "1", "question_id": 4, "question": "How can you access AWS Batch?", "answer_span": "You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources.", "chunk": "as soon as resources become available, eliminating the need for manual coordination and improving resource utilization. For machine learning workloads, AWS Batch provides queuing capabilities for SageMaker Training jobs. You can configure queues with specific policies to optimize cost, performance, and resource allocation for your ML Training workloads. 
This provides a shared responsibility model where administrators set up the infrastructure and permissions, while data scientists can focus on submitting and monitoring their ML training workloads. Jobs are automatically queued and executed based on configured priorities and resource availability. 2 AWS Batch User Guide Are you a first-time AWS Batch user? If you are a first-time user of AWS Batch, we recommend that you begin by reading the following sections: • Components of AWS Batch • Create IAM account and administrative user • Setting up AWS Batch • Getting started with AWS Batch tutorials • Getting started with AWS Batch on SageMaker AI Related services AWS Batch is a fully managed batch computing service that plans, schedules, and runs your containerized batch ML, simulation, and analytics workloads across the full range of AWS compute offerings, such as Amazon ECS, Amazon EKS, AWS Fargate, and Spot or On-Demand Instances. For more information about each managed compute service, see: • Amazon EC2 User Guide • AWS Fargate Developer Guide • Amazon EKS User Guide • Amazon SageMaker AI Developer Guide Accessing AWS Batch You can access AWS Batch using the following: AWS Batch console The web interface where you create and manage resources. AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the"} +{"global_id": 904, "doc_id": "batch", "chunk_id": "2", "question_id": 1, "question": "What is the AWS Command Line Interface used for?", "answer_span": "Interact with AWS services using commands in your command line shell.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. 
Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 905, "doc_id": "batch", "chunk_id": "2", "question_id": 2, "question": "On which operating systems is the AWS Command Line Interface supported?", "answer_span": "The AWS Command Line Interface is supported on Windows, macOS, and Linux.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 906, "doc_id": "batch", "chunk_id": "2", "question_id": 3, "question": "What does AWS Batch simplify?", "answer_span": "AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. 
After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 907, "doc_id": "batch", "chunk_id": "2", "question_id": 4, "question": "What is a compute environment in AWS Batch?", "answer_span": "A compute environment is a set of managed or unmanaged compute resources that are used to run jobs.", "chunk": "AWS Command Line Interface Interact with AWS services using commands in your command line shell. The AWS Command Line Interface is supported on Windows, macOS, and Linux. For more information about the AWS CLI, see AWS Command Line Interface User Guide. You can find the AWS Batch commands in the AWS CLI Command Reference. Are you a first-time AWS Batch user? 3 AWS Batch User Guide AWS SDKs If you prefer to build applications using language-specific APIs instead of submitting a request over HTTP or HTTPS, use the libraries, sample code, tutorials, and other resources provided by AWS. These libraries provide basic functions that automate tasks, such as cryptographically signing your requests, retrying requests, and handling error responses. These functions make it more efficient for you to get started. For more information, see Tools to Build on AWS. Components of AWS Batch AWS Batch simplifies running batch jobs across multiple Availability Zones within a Region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure. Compute environment A compute environment is a set of managed or unmanaged compute resources that are used to run jobs. With managed compute environments, you can specify desired compute type (Fargate or EC2) at several levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum"} +{"global_id": 908, "doc_id": "batch", "chunk_id": "3", "question_id": 1, "question": "What can you set up in AWS Batch regarding EC2 instances?", "answer_span": "You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. 
You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 909, "doc_id": "batch", "chunk_id": "3", "question_id": 2, "question": "What does a job definition specify in AWS Batch?", "answer_span": "A job definition specifies how jobs are to be run.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. 
The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 910, "doc_id": "batch", "chunk_id": "3", "question_id": 3, "question": "What can you manage in your own compute environments?", "answer_span": "As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 911, "doc_id": "batch", "chunk_id": "3", "question_id": 4, "question": "What can you assign to compute environments in job queues?", "answer_span": "You can also assign priority values for these compute environments and even across job queues themselves.", "chunk": "levels of detail. You can set up compute environments that use a particular type of EC2 instance, a particular model such as c5.2xlarge or m5.10xlarge. Or, you can choose only to specify that you want to use the newest instance types. You can also specify the minimum, desired, and maximum number of vCPUs for the environment, along with the amount that you're Components of AWS Batch 4 AWS Batch User Guide willing to pay for a Spot Instance as a percentage of the On-Demand Instance price and a target set of VPC subnets. AWS Batch efficiently launches, manages, and terminates compute types as needed. You can also manage your own compute environments. As such, you're responsible for setting up and scaling the instances in an Amazon ECS cluster that AWS Batch creates for you. For more information, see Compute environments for AWS Batch. Job queues When you submit an AWS Batch job, you submit it to a particular job queue, where the job resides until it's scheduled onto a compute environment. You associate one or more compute environments with a job queue. 
You can also assign priority values for these compute environments and even across job queues themselves. For example, you can have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper. For more information, see Job queues. Job definitions A job definition specifies how jobs are to be run. You can think of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points"} +{"global_id": 912, "doc_id": "batch", "chunk_id": "4", "question_id": 1, "question": "What is a job definition in AWS Batch?", "answer_span": "of a job definition as a blueprint for the resources in your job.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 913, "doc_id": "batch", "chunk_id": "4", "question_id": 2, "question": "What can you supply your job with to provide access to other AWS resources?", "answer_span": "You can supply your job with an IAM role to provide access to other AWS resources.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. 
For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 914, "doc_id": "batch", "chunk_id": "4", "question_id": 3, "question": "What does a job in AWS Batch run as?", "answer_span": "It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. 
You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 915, "doc_id": "batch", "chunk_id": "4", "question_id": 4, "question": "What is a consumable resource?", "answer_span": "A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on.", "chunk": "of a job definition as a blueprint for the resources in your job. You can supply your job with an IAM role to provide access to other AWS resources. You also specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job definitions Jobs A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on AWS Fargate or Amazon EC2 resources in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs or the availability of resources you specify. For more information, see Jobs. Scheduling policy You can use scheduling policies to configure how compute resources in a job queue are allocated between users or workloads. Using fair-share scheduling policies, you can assign different share identifiers to workloads or users. The AWS Batch job scheduler defaults to a first-in, first-out (FIFO) strategy. For more information, see Fair-share scheduling policies. Job queues 5 AWS Batch User Guide Consumable resources A consumable resource is a resource that is needed to run your jobs, such as a 3rd party license token, database access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating"} +{"global_id": 916, "doc_id": "batch", "chunk_id": "5", "question_id": 1, "question": "What are consumable resources needed for?", "answer_span": "You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment define how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. 
Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} +{"global_id": 917, "doc_id": "batch", "chunk_id": "5", "question_id": 2, "question": "What do Service Environments define?", "answer_span": "A Service Environment define how AWS Batch integrates with SageMaker for job execution.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment define how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} +{"global_id": 918, "doc_id": "batch", "chunk_id": "5", "question_id": 3, "question": "What is a service job?", "answer_span": "A service job is a unit of work that you submit to AWS Batch to run on a service environment.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. 
You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment define how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} +{"global_id": 919, "doc_id": "batch", "chunk_id": "5", "question_id": 4, "question": "What do service jobs leverage?", "answer_span": "Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service.", "chunk": "access bandwidth, the need to throttle calls to a third-party API, and so on. You specify the consumable resources which are needed for a job to run, and Batch takes these resource dependencies into account when it schedules a job. You can reduce the under-utilization of compute resources by allocating only the jobs that have all the required resources available. For more information, see Resource-aware scheduling . Service Environment A Service Environment define how AWS Batch integrates with SageMaker for job execution. Service Environments enable AWS Batch to submit and manage jobs on SageMaker while providing the queuing, scheduling, and priority management capabilities of AWS Batch. Service Environments define capacity limits for specific service types such as SageMaker Training jobs. The capacity limits control the maximum resources that can be used by service jobs in the environment. For more information, see Service environments for AWS Batch. Service job A service job is a unit of work that you submit to AWS Batch to run on a service environment. Service jobs leverage AWS Batch's queuing and scheduling capabilities while delegating actual execution to the external service. For example, SageMaker Training jobs submitted as service jobs are queued and prioritized by AWS Batch, but the SageMaker Training job execution occurs within SageMaker AI infrastructure. 
This integration enables data scientists and ML engineers to benefit from AWS Batch's automated workload management, and priority queuing, for their SageMaker AI Training workloads. Service jobs can reference other jobs by name or ID and support job dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon"} +{"global_id": 920, "doc_id": "batch", "chunk_id": "6", "question_id": 1, "question": "What must you use to work with AWS Batch?", "answer_span": "you must use a version of the AWS CLI that supports the latest AWS Batch features.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition,"} +{"global_id": 921, "doc_id": "batch", "chunk_id": "6", "question_id": 2, "question": "What is the first step to get started with AWS Batch?", "answer_span": "To get started, you need to create an AWS account and a single user that is typically granted administrative rights.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. 
For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition,"} +{"global_id": 922, "doc_id": "batch", "chunk_id": "6", "question_id": 3, "question": "What does AWS Batch use in its compute environments?", "answer_span": "AWS Batch uses Amazon ECS container instances in its compute environments.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition,"} +{"global_id": 923, "doc_id": "batch", "chunk_id": "6", "question_id": 4, "question": "Where can you find more information about the AWS CLI?", "answer_span": "For more information, see http://aws.amazon.com/cli/.", "chunk": "dependencies. For more information, see Service jobs in AWS Batch. 
Consumable resources 6 AWS Batch User Guide Setting up AWS Batch If you've already signed up for Amazon Web Services (AWS) and are using Amazon Elastic Compute Cloud (Amazon EC2) or Amazon Elastic Container Service (Amazon ECS), you can soon use AWS Batch. The setup process for these services is similar. This is because AWS Batch uses Amazon ECS container instances in its compute environments. To use the AWS CLI with AWS Batch, you must use a version of the AWS CLI that supports the latest AWS Batch features. If you don't see support for an AWS Batch feature in the AWS CLI, upgrade to the latest version. For more information, see http://aws.amazon.com/cli/. Note Because AWS Batch uses components of Amazon EC2, you use the Amazon EC2 console for many of these steps. Complete the following tasks to get set up for AWS Batch. Topics • Create IAM account and administrative user • Create IAM roles for your compute environments and container instances • Create a key pair for your instances • Create a VPC • Create a security group • Install the AWS CLI Create IAM account and administrative user To get started, you need to create an AWS account and a single user that is typically granted administrative rights. To accomplish this, complete the following tutorials: Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. Create IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition,"} +{"global_id": 924, "doc_id": "batch", "chunk_id": "7", "question_id": 1, "question": "What can you use the AWS Batch first-run wizard for?", "answer_span": "You can use the AWS Batch first-run wizard to get started quickly with AWS Batch.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. 
Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition •"} +{"global_id": 925, "doc_id": "batch", "chunk_id": "7", "question_id": 2, "question": "What does Amazon EC2 provide?", "answer_span": "Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition •"} +{"global_id": 926, "doc_id": "batch", "chunk_id": "7", "question_id": 3, "question": "Who is the intended audience for the tutorial?", "answer_span": "This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. 
Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition •"} +{"global_id": 927, "doc_id": "batch", "chunk_id": "7", "question_id": 4, "question": "What can you create using the AWS Batch console wizard?", "answer_span": "This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue.", "chunk": "IAM account and administrative user 7 AWS Batch User Guide Getting started with AWS Batch tutorials You can use the AWS Batch first-run wizard to get started quickly with AWS Batch. After you complete the Prerequisites, you can use the first-run wizard to create a compute environment, a job definition, and a job queue. You can also submit a sample \"Hello World\" job using the AWS Batch first-run wizard to test your configuration. If you already have a Docker image that you want to launch in AWS Batch, you can use that image to create a job definition. Afterward, you can use the AWS Batch first-run wizard to create a compute environment, job queue, and submit a sample Hello World job. Getting started with Amazon EC2 orchestration using the Wizard Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic. Overview This tutorial demonstrates how to setup AWS Batch with the Wizard to configure Amazon EC2 and run Hello World. Intended Audience This tutorial is designed for system administrators and developers responsible for setting up, testing, and deploying AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition •"} +{"global_id": 928, "doc_id": "batch", "chunk_id": "8", "question_id": 1, "question": "What does this tutorial show you how to use?", "answer_span": "This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue.", "chunk": "AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. 
Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide Best practices for AWS Batch You can use AWS Batch to run a variety of demanding computational workloads at scale without managing a complex architecture. AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run"} +{"global_id": 929, "doc_id": "batch", "chunk_id": "8", "question_id": 2, "question": "How long should it take to complete this tutorial?", "answer_span": "It should take about 10–15 minutes to complete this tutorial.", "chunk": "AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide Best practices for AWS Batch You can use AWS Batch to run a variety of demanding computational workloads at scale without managing a complex architecture. 
AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run"} +{"global_id": 930, "doc_id": "batch", "chunk_id": "8", "question_id": 3, "question": "What are the prerequisites before you begin?", "answer_span": "• Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role.", "chunk": "AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide Best practices for AWS Batch You can use AWS Batch to run a variety of demanding computational workloads at scale without managing a complex architecture. AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run"} +{"global_id": 931, "doc_id": "batch", "chunk_id": "8", "question_id": 4, "question": "What can AWS Batch be used for?", "answer_span": "You can use AWS Batch to run a variety of demanding computational workloads at scale without managing a complex architecture.", "chunk": "AWS Batch. Features Used This tutorial shows you how to use the AWS Batch console wizard to: • Create and configure an Amazon EC2 compute environment • Create a job queue. Getting started with Amazon EC2 using the Wizard 16 AWS Batch User Guide • Create a job definition • Create and submit a job to run • View the output of the job in CloudWatch Time Required It should take about 10–15 minutes to complete this tutorial. Regional Restrictions There are no country or regional restrictions associated with using this solution. Resource Usage Costs There's no charge for creating an AWS account. However, by implementing this solution, you might incur some or all of the costs that are listed in the following table. Description Cost (US dollars) Amazon EC2 instance You pay for each Amazon EC2 instance that is created. For more information about pricing, see Amazon EC2 Pricing. Prerequisites Before you begin: • Create an AWS account if you don't have one. • Create the ecsInstanceRole Instance role. 
Step 1: Create a compute environment Important To get started as simply and quickly as possible, this tutorial includes steps with default settings. Before creating for production use, we recommend that you familiarize yourself with all settings and deploy with the settings that meet your requirements. To create a compute environment for an Amazon EC2 orchestration, do the following: Prerequisites 17 AWS Batch User Guide Best practices for AWS Batch You can use AWS Batch to run a variety of demanding computational workloads at scale without managing a complex architecture. AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run"} +{"global_id": 932, "doc_id": "batch", "chunk_id": "9", "question_id": 1, "question": "What can AWS Batch jobs be used for?", "answer_span": "AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning.", "chunk": "demanding computational workloads at scale without managing a complex architecture. AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run and optimize your workloads when using AWS Batch. Topics • When to use AWS Batch • Checklist to run at scale • Optimize containers and AMIs • Choose the right compute environment resource • Amazon EC2 On-Demand or Amazon EC2 Spot • Use Amazon EC2 Spot best practices for AWS Batch • Common errors and troubleshooting When to use AWS Batch AWS Batch runs jobs at scale and at low cost, and provides queuing services and cost-optimized scaling. However, not every workload is suitable to be run using AWS Batch. • Short jobs – If a job runs for only a few seconds, the overhead to schedule the batch job might take longer than the runtime of the job itself. As a workaround, binpack your tasks together before you submit them in AWS Batch. Then, configure your AWS Batch jobs to iterate over the tasks. For example, stage the individual task arguments into an Amazon DynamoDB table or as a file in an Amazon S3 bucket. Consider grouping tasks so the jobs run 3-5 minutes each. After you binpack the jobs, loop through your task groups within your AWS Batch job. • Jobs that must be run immediately – AWS Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is"} +{"global_id": 933, "doc_id": "batch", "chunk_id": "9", "question_id": 2, "question": "What is a consideration for short jobs when using AWS Batch?", "answer_span": "If a job runs for only a few seconds, the overhead to schedule the batch job might take longer than the runtime of the job itself.", "chunk": "demanding computational workloads at scale without managing a complex architecture. AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run and optimize your workloads when using AWS Batch. 
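As a rough sketch of what the wizard's compute environment and job queue steps do under the hood, the boto3 calls below create a managed Amazon EC2 compute environment and attach a job queue to it. This assumes the ecsInstanceRole instance profile, a VPC subnet, and a security group already exist; every identifier shown is a placeholder.

```python
# Sketch: a managed EC2 compute environment plus a job queue, roughly mirroring
# the wizard's defaults. All ARNs and IDs below are placeholders.
import boto3

batch = boto3.client("batch")

batch.create_compute_environment(
    computeEnvironmentName="getting-started-ec2",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",
        "allocationStrategy": "BEST_FIT_PROGRESSIVE",
        "minvCpus": 0,
        "maxvCpus": 256,
        "instanceTypes": ["optimal"],
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "instanceRole": "arn:aws:iam::111122223333:instance-profile/ecsInstanceRole",
    },
    # serviceRole omitted: AWS Batch falls back to its service-linked role.
)

# Wait until the compute environment reports status VALID, then attach a queue.
batch.create_job_queue(
    jobQueueName="getting-started-queue",
    state="ENABLED",
    priority=1,
    computeEnvironmentOrder=[
        {"order": 1, "computeEnvironment": "getting-started-ec2"},
    ],
)
```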
Topics • When to use AWS Batch • Checklist to run at scale • Optimize containers and AMIs • Choose the right compute environment resource • Amazon EC2 On-Demand or Amazon EC2 Spot • Use Amazon EC2 Spot best practices for AWS Batch • Common errors and troubleshooting When to use AWS Batch AWS Batch runs jobs at scale and at low cost, and provides queuing services and cost-optimized scaling. However, not every workload is suitable to be run using AWS Batch. • Short jobs – If a job runs for only a few seconds, the overhead to schedule the batch job might take longer than the runtime of the job itself. As a workaround, binpack your tasks together before you submit them in AWS Batch. Then, configure your AWS Batch jobs to iterate over the tasks. For example, stage the individual task arguments into an Amazon DynamoDB table or as a file in an Amazon S3 bucket. Consider grouping tasks so the jobs run 3-5 minutes each. After you binpack the jobs, loop through your task groups within your AWS Batch job. • Jobs that must be run immediately – AWS Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is"} +{"global_id": 934, "doc_id": "batch", "chunk_id": "9", "question_id": 3, "question": "What does AWS Batch provide for jobs?", "answer_span": "AWS Batch runs jobs at scale and at low cost, and provides queuing services and cost-optimized scaling.", "chunk": "demanding computational workloads at scale without managing a complex architecture. AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run and optimize your workloads when using AWS Batch. Topics • When to use AWS Batch • Checklist to run at scale • Optimize containers and AMIs • Choose the right compute environment resource • Amazon EC2 On-Demand or Amazon EC2 Spot • Use Amazon EC2 Spot best practices for AWS Batch • Common errors and troubleshooting When to use AWS Batch AWS Batch runs jobs at scale and at low cost, and provides queuing services and cost-optimized scaling. However, not every workload is suitable to be run using AWS Batch. • Short jobs – If a job runs for only a few seconds, the overhead to schedule the batch job might take longer than the runtime of the job itself. As a workaround, binpack your tasks together before you submit them in AWS Batch. Then, configure your AWS Batch jobs to iterate over the tasks. For example, stage the individual task arguments into an Amazon DynamoDB table or as a file in an Amazon S3 bucket. Consider grouping tasks so the jobs run 3-5 minutes each. After you binpack the jobs, loop through your task groups within your AWS Batch job. • Jobs that must be run immediately – AWS Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. 
If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is"} +{"global_id": 935, "doc_id": "batch", "chunk_id": "9", "question_id": 4, "question": "What should you consider if you need a response in under a few seconds?", "answer_span": "If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is.", "chunk": "demanding computational workloads at scale without managing a complex architecture. AWS Batch jobs can be used in a wide range of use cases in areas such as epidemiology, gaming, and machine learning. This topic covers the best practices to consider while using AWS Batch and guidance on how to run and optimize your workloads when using AWS Batch. Topics • When to use AWS Batch • Checklist to run at scale • Optimize containers and AMIs • Choose the right compute environment resource • Amazon EC2 On-Demand or Amazon EC2 Spot • Use Amazon EC2 Spot best practices for AWS Batch • Common errors and troubleshooting When to use AWS Batch AWS Batch runs jobs at scale and at low cost, and provides queuing services and cost-optimized scaling. However, not every workload is suitable to be run using AWS Batch. • Short jobs – If a job runs for only a few seconds, the overhead to schedule the batch job might take longer than the runtime of the job itself. As a workaround, binpack your tasks together before you submit them in AWS Batch. Then, configure your AWS Batch jobs to iterate over the tasks. For example, stage the individual task arguments into an Amazon DynamoDB table or as a file in an Amazon S3 bucket. Consider grouping tasks so the jobs run 3-5 minutes each. After you binpack the jobs, loop through your task groups within your AWS Batch job. • Jobs that must be run immediately – AWS Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is"} +{"global_id": 936, "doc_id": "batch", "chunk_id": "10", "question_id": 1, "question": "What does AWS Batch optimize for?", "answer_span": "AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput.", "chunk": "Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is more suitable. When to use AWS Batch 487 AWS Batch User Guide Checklist to run at scale Before you run a large workload on 50 thousand or more vCPUs, consider the following checklist. Note If you plan to run a large workload on a million or more vCPUs or need guidance running at large scale, contact your AWS team. • Check your Amazon EC2 quotas – Check your Amazon EC2 quotas (also known as limits) in the Service Quotas panel of the AWS Management Console. If necessary, request a quota increase for your peak number of Amazon EC2 instances. Remember that Amazon EC2 Spot and Amazon OnDemand instances have separate quotas. For more information, see Getting started with Service Quotas. • Verify your Amazon Elastic Block Store quota for each Region – Each instance uses a GP2 or GP3 volume for the operating system. By default, the quota for each AWS Region is 300 TiB. 
However, each instance uses counts as part of this quota. So, make sure to factor this in when you verify your Amazon Elastic Block Store quota for each Region. If your quota is reached, you can’t create more instances. For more information, see Amazon Elastic Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. For more information, see Best practices design patterns: optimizing"} +{"global_id": 937, "doc_id": "batch", "chunk_id": "10", "question_id": 2, "question": "What is more suitable if you need a response in under a few seconds?", "answer_span": "a service-based approach using Amazon ECS or Amazon EKS is more suitable.", "chunk": "Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is more suitable. When to use AWS Batch 487 AWS Batch User Guide Checklist to run at scale Before you run a large workload on 50 thousand or more vCPUs, consider the following checklist. Note If you plan to run a large workload on a million or more vCPUs or need guidance running at large scale, contact your AWS team. • Check your Amazon EC2 quotas – Check your Amazon EC2 quotas (also known as limits) in the Service Quotas panel of the AWS Management Console. If necessary, request a quota increase for your peak number of Amazon EC2 instances. Remember that Amazon EC2 Spot and Amazon OnDemand instances have separate quotas. For more information, see Getting started with Service Quotas. • Verify your Amazon Elastic Block Store quota for each Region – Each instance uses a GP2 or GP3 volume for the operating system. By default, the quota for each AWS Region is 300 TiB. However, each instance uses counts as part of this quota. So, make sure to factor this in when you verify your Amazon Elastic Block Store quota for each Region. If your quota is reached, you can’t create more instances. For more information, see Amazon Elastic Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. For more information, see Best practices design patterns: optimizing"} +{"global_id": 938, "doc_id": "batch", "chunk_id": "10", "question_id": 3, "question": "What should you check before running a large workload on 50 thousand or more vCPUs?", "answer_span": "Check your Amazon EC2 quotas – Check your Amazon EC2 quotas (also known as limits) in the Service Quotas panel of the AWS Management Console.", "chunk": "Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is more suitable. When to use AWS Batch 487 AWS Batch User Guide Checklist to run at scale Before you run a large workload on 50 thousand or more vCPUs, consider the following checklist. Note If you plan to run a large workload on a million or more vCPUs or need guidance running at large scale, contact your AWS team. 
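The short-jobs guidance above suggests binpacking: stage individual task arguments in Amazon S3 (or DynamoDB) and have each AWS Batch job loop through a group of tasks sized to run a few minutes in total. A minimal sketch of that loop follows; the TASK_BUCKET and TASK_MANIFEST_KEY environment variables and the process_task function are hypothetical placeholders.

```python
# Sketch of the binpacking pattern: one Batch job works through a group of
# small tasks staged as a JSON manifest in Amazon S3, instead of one job per task.
import json
import os

import boto3

s3 = boto3.client("s3")


def process_task(task):
    # Placeholder for the real per-task work (each task only takes seconds).
    print("processing", task)


def main():
    bucket = os.environ["TASK_BUCKET"]        # placeholder env var
    key = os.environ["TASK_MANIFEST_KEY"]     # e.g. one manifest per job

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    tasks = json.loads(body)                  # list of staged task arguments

    for task in tasks:                        # loop through the task group
        process_task(task)


if __name__ == "__main__":
    main()
```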
• Check your Amazon EC2 quotas – Check your Amazon EC2 quotas (also known as limits) in the Service Quotas panel of the AWS Management Console. If necessary, request a quota increase for your peak number of Amazon EC2 instances. Remember that Amazon EC2 Spot and Amazon OnDemand instances have separate quotas. For more information, see Getting started with Service Quotas. • Verify your Amazon Elastic Block Store quota for each Region – Each instance uses a GP2 or GP3 volume for the operating system. By default, the quota for each AWS Region is 300 TiB. However, each instance uses counts as part of this quota. So, make sure to factor this in when you verify your Amazon Elastic Block Store quota for each Region. If your quota is reached, you can’t create more instances. For more information, see Amazon Elastic Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. For more information, see Best practices design patterns: optimizing"} +{"global_id": 939, "doc_id": "batch", "chunk_id": "10", "question_id": 4, "question": "What is the default quota for each AWS Region for Amazon Elastic Block Store?", "answer_span": "By default, the quota for each AWS Region is 300 TiB.", "chunk": "Batch can process jobs quickly. However, AWS Batch is a scheduler and optimizes for cost performance, job priority, and throughput. AWS Batch might require time to process your requests. If you need a response in under a few seconds, then a service-based approach using Amazon ECS or Amazon EKS is more suitable. When to use AWS Batch 487 AWS Batch User Guide Checklist to run at scale Before you run a large workload on 50 thousand or more vCPUs, consider the following checklist. Note If you plan to run a large workload on a million or more vCPUs or need guidance running at large scale, contact your AWS team. • Check your Amazon EC2 quotas – Check your Amazon EC2 quotas (also known as limits) in the Service Quotas panel of the AWS Management Console. If necessary, request a quota increase for your peak number of Amazon EC2 instances. Remember that Amazon EC2 Spot and Amazon OnDemand instances have separate quotas. For more information, see Getting started with Service Quotas. • Verify your Amazon Elastic Block Store quota for each Region – Each instance uses a GP2 or GP3 volume for the operating system. By default, the quota for each AWS Region is 300 TiB. However, each instance uses counts as part of this quota. So, make sure to factor this in when you verify your Amazon Elastic Block Store quota for each Region. If your quota is reached, you can’t create more instances. For more information, see Amazon Elastic Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. 
For more information, see Best practices design patterns: optimizing"} +{"global_id": 940, "doc_id": "batch", "chunk_id": "11", "question_id": 1, "question": "What storage solution does Amazon recommend for high throughput?", "answer_span": "Use Amazon S3 for storage", "chunk": "Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. For more information, see Best practices design patterns: optimizing Amazon S3 performance. • Scale gradually to identify bottlenecks early – For a job that runs on a million or more vCPUs, start lower and gradually increase so that you can identify bottlenecks early. For example, start by running on 50 thousand vCPUs. Then, increase the count to 200 thousand vCPUs, and then 500 thousand vCPUs, and so on. In other words, continue to gradually increase the vCPU count until you reach the desired number of vCPUs. • Monitor to identify potential issues early – To avoid potential breaks and issues when running at scale, make sure to monitor both your application and architecture. Breaks might occur even when scaling from 1 thousand to 5 thousand vCPUs. You can use Amazon CloudWatch Logs to review log data or use CloudWatch Embedded Metrics using a client library. For more information, see CloudWatch Logs agent reference and aws-embedded-metrics Checklist to run at scale 488 AWS Batch User Guide Optimize containers and AMIs Container size and structure are important for the first set of jobs that you run. This is especially true if the container is larger than 4 GB. Container images are built in layers. The layers are retrieved in parallel by Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster –"} +{"global_id": 941, "doc_id": "batch", "chunk_id": "11", "question_id": 2, "question": "How should you start scaling vCPUs for a job that runs on a million or more vCPUs?", "answer_span": "start lower and gradually increase", "chunk": "Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. For more information, see Best practices design patterns: optimizing Amazon S3 performance. • Scale gradually to identify bottlenecks early – For a job that runs on a million or more vCPUs, start lower and gradually increase so that you can identify bottlenecks early. For example, start by running on 50 thousand vCPUs. Then, increase the count to 200 thousand vCPUs, and then 500 thousand vCPUs, and so on. In other words, continue to gradually increase the vCPU count until you reach the desired number of vCPUs. • Monitor to identify potential issues early – To avoid potential breaks and issues when running at scale, make sure to monitor both your application and architecture. Breaks might occur even when scaling from 1 thousand to 5 thousand vCPUs. You can use Amazon CloudWatch Logs to review log data or use CloudWatch Embedded Metrics using a client library. 
For more information, see CloudWatch Logs agent reference and aws-embedded-metrics Checklist to run at scale 488 AWS Batch User Guide Optimize containers and AMIs Container size and structure are important for the first set of jobs that you run. This is especially true if the container is larger than 4 GB. Container images are built in layers. The layers are retrieved in parallel by Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster –"} +{"global_id": 942, "doc_id": "batch", "chunk_id": "11", "question_id": 3, "question": "What should you monitor to identify potential issues early when running at scale?", "answer_span": "monitor both your application and architecture", "chunk": "Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. For more information, see Best practices design patterns: optimizing Amazon S3 performance. • Scale gradually to identify bottlenecks early – For a job that runs on a million or more vCPUs, start lower and gradually increase so that you can identify bottlenecks early. For example, start by running on 50 thousand vCPUs. Then, increase the count to 200 thousand vCPUs, and then 500 thousand vCPUs, and so on. In other words, continue to gradually increase the vCPU count until you reach the desired number of vCPUs. • Monitor to identify potential issues early – To avoid potential breaks and issues when running at scale, make sure to monitor both your application and architecture. Breaks might occur even when scaling from 1 thousand to 5 thousand vCPUs. You can use Amazon CloudWatch Logs to review log data or use CloudWatch Embedded Metrics using a client library. For more information, see CloudWatch Logs agent reference and aws-embedded-metrics Checklist to run at scale 488 AWS Batch User Guide Optimize containers and AMIs Container size and structure are important for the first set of jobs that you run. This is especially true if the container is larger than 4 GB. Container images are built in layers. The layers are retrieved in parallel by Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster –"} +{"global_id": 943, "doc_id": "batch", "chunk_id": "11", "question_id": 4, "question": "What is recommended for faster startup times regarding container size?", "answer_span": "we recommend that you optimize container structure and size for faster startup times", "chunk": "Block Store endpoints and quotas • Use Amazon S3 for storage – Amazon S3 provides high throughput and helps to eliminate the guesswork on how much storage to provision based on the number of jobs and instances in each Availability Zone. For more information, see Best practices design patterns: optimizing Amazon S3 performance. 
• Scale gradually to identify bottlenecks early – For a job that runs on a million or more vCPUs, start lower and gradually increase so that you can identify bottlenecks early. For example, start by running on 50 thousand vCPUs. Then, increase the count to 200 thousand vCPUs, and then 500 thousand vCPUs, and so on. In other words, continue to gradually increase the vCPU count until you reach the desired number of vCPUs. • Monitor to identify potential issues early – To avoid potential breaks and issues when running at scale, make sure to monitor both your application and architecture. Breaks might occur even when scaling from 1 thousand to 5 thousand vCPUs. You can use Amazon CloudWatch Logs to review log data or use CloudWatch Embedded Metrics using a client library. For more information, see CloudWatch Logs agent reference and aws-embedded-metrics Checklist to run at scale 488 AWS Batch User Guide Optimize containers and AMIs Container size and structure are important for the first set of jobs that you run. This is especially true if the container is larger than 4 GB. Container images are built in layers. The layers are retrieved in parallel by Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster –"} +{"global_id": 944, "doc_id": "batch", "chunk_id": "12", "question_id": 1, "question": "How can you increase the number of concurrent threads in Docker?", "answer_span": "You can increase the number of concurrent threads using the max-concurrent-downloads parameter.", "chunk": "Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster – Smaller containers can lead to faster application start times. To decrease container size, offload libraries or files that are updated infrequently to the Amazon Machine Image (AMI). You can also use bind mounts to give access to your containers. For more information, see Bind mounts. • Create layers that are even in size and break up large layers – Each layer is retrieved by one thread. So, a large layer might significantly impact your job startup time. We recommend a maximum layer size of 2 GB as a good tradeoff between larger container size and faster startup times. You can run the docker history your_image_id command to check your container image structure and layer size. For more information, see the Docker documentation. • Use Amazon Elastic Container Registry as your container repository – When you run thousands of jobs in parallel, a self-managed repository can fail or throttle throughput. Amazon ECR works at scale and can handle workloads with up to over a million vCPUs. Optimize containers and AMIs 489 AWS Batch User Guide Choose the right compute environment resource AWS Fargate requires less initial setup and configuration than Amazon EC2 and is likely easier to use, particularly if it's your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. 
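For the "scale gradually" checklist item above, one way to step from 50 thousand vCPUs toward the target is to raise maxvCpus on an existing managed compute environment in stages and verify health between stages. The sketch below is illustrative only; the environment name, step values, and verify_current_scale hook are placeholders.

```python
# Sketch: raise the vCPU ceiling of a managed compute environment in stages.
import boto3

batch = boto3.client("batch")

SCALE_STEPS = [50_000, 200_000, 500_000]  # vCPUs, per the gradual-scaling advice


def verify_current_scale():
    # Placeholder: review CloudWatch metrics and logs for the current stage and
    # only return once the workload looks healthy at this scale.
    pass


for max_vcpus in SCALE_STEPS:
    batch.update_compute_environment(
        computeEnvironment="large-scale-ec2",  # placeholder environment name
        computeResources={"maxvCpus": max_vcpus},
    )
    verify_current_scale()
```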
If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. • The requirements of your jobs are"} +{"global_id": 945, "doc_id": "batch", "chunk_id": "12", "question_id": 2, "question": "What is recommended to optimize for faster startup times?", "answer_span": "we recommend that you optimize container structure and size for faster startup times.", "chunk": "Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster – Smaller containers can lead to faster application start times. To decrease container size, offload libraries or files that are updated infrequently to the Amazon Machine Image (AMI). You can also use bind mounts to give access to your containers. For more information, see Bind mounts. • Create layers that are even in size and break up large layers – Each layer is retrieved by one thread. So, a large layer might significantly impact your job startup time. We recommend a maximum layer size of 2 GB as a good tradeoff between larger container size and faster startup times. You can run the docker history your_image_id command to check your container image structure and layer size. For more information, see the Docker documentation. • Use Amazon Elastic Container Registry as your container repository – When you run thousands of jobs in parallel, a self-managed repository can fail or throttle throughput. Amazon ECR works at scale and can handle workloads with up to over a million vCPUs. Optimize containers and AMIs 489 AWS Batch User Guide Choose the right compute environment resource AWS Fargate requires less initial setup and configuration than Amazon EC2 and is likely easier to use, particularly if it's your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. • The requirements of your jobs are"} +{"global_id": 946, "doc_id": "batch", "chunk_id": "12", "question_id": 3, "question": "What is a good maximum layer size for faster startup times?", "answer_span": "We recommend a maximum layer size of 2 GB as a good tradeoff between larger container size and faster startup times.", "chunk": "Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster – Smaller containers can lead to faster application start times. To decrease container size, offload libraries or files that are updated infrequently to the Amazon Machine Image (AMI). You can also use bind mounts to give access to your containers. For more information, see Bind mounts. • Create layers that are even in size and break up large layers – Each layer is retrieved by one thread. So, a large layer might significantly impact your job startup time. 
We recommend a maximum layer size of 2 GB as a good tradeoff between larger container size and faster startup times. You can run the docker history your_image_id command to check your container image structure and layer size. For more information, see the Docker documentation. • Use Amazon Elastic Container Registry as your container repository – When you run thousands of jobs in parallel, a self-managed repository can fail or throttle throughput. Amazon ECR works at scale and can handle workloads with up to over a million vCPUs. Optimize containers and AMIs 489 AWS Batch User Guide Choose the right compute environment resource AWS Fargate requires less initial setup and configuration than Amazon EC2 and is likely easier to use, particularly if it's your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. • The requirements of your jobs are"} +{"global_id": 947, "doc_id": "batch", "chunk_id": "12", "question_id": 4, "question": "What does AWS Fargate require compared to Amazon EC2?", "answer_span": "AWS Fargate requires less initial setup and configuration than Amazon EC2 and is likely easier to use, particularly if it's your first time.", "chunk": "Docker using three concurrent threads. You can increase the number of concurrent threads using the max-concurrent-downloads parameter. For more information, see the Dockerd documentation. Although you can use larger containers, we recommend that you optimize container structure and size for faster startup times. • Smaller containers are fetched faster – Smaller containers can lead to faster application start times. To decrease container size, offload libraries or files that are updated infrequently to the Amazon Machine Image (AMI). You can also use bind mounts to give access to your containers. For more information, see Bind mounts. • Create layers that are even in size and break up large layers – Each layer is retrieved by one thread. So, a large layer might significantly impact your job startup time. We recommend a maximum layer size of 2 GB as a good tradeoff between larger container size and faster startup times. You can run the docker history your_image_id command to check your container image structure and layer size. For more information, see the Docker documentation. • Use Amazon Elastic Container Registry as your container repository – When you run thousands of jobs in parallel, a self-managed repository can fail or throttle throughput. Amazon ECR works at scale and can handle workloads with up to over a million vCPUs. Optimize containers and AMIs 489 AWS Batch User Guide Choose the right compute environment resource AWS Fargate requires less initial setup and configuration than Amazon EC2 and is likely easier to use, particularly if it's your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. 
• The requirements of your jobs are"} +{"global_id": 948, "doc_id": "batch", "chunk_id": "13", "question_id": 1, "question": "What do you not need to manage with Fargate?", "answer_span": "With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security.", "chunk": "your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. • The requirements of your jobs are 16 vCPUs or less, no GPUs, and 120 GiB of memory or less. For more information, see When to use Fargate. If you have the following requirements, we recommend that you use Amazon EC2 instances: • You require increased control over the instance selection or require using specific instance types. • Your jobs require resources that AWS Fargate can’t provide, such as GPUs, more memory, a custom AMI, or the Amazon Elastic Fabric Adapter. • You require a high level of throughput or concurrency. • You need to customize your AMI, Amazon EC2 Launch Template, or access to special Linux parameters. With Amazon EC2, you can more finely tune your workload to your specific requirements and run at scale if needed. Amazon EC2 On-Demand or Amazon EC2 Spot Most AWS Batch customers use Amazon EC2 Spot instances because of the savings over OnDemand instances. However, if your workload runs for multiple hours and can't be interrupted, On-Demand instances might be more suitable for you. You can always try Spot instances first and switch to On-Demand if necessary. If you have the following requirements and expectations, use Amazon EC2 On-Demand instances: • The runtime of your jobs is more than an hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If"} +{"global_id": 949, "doc_id": "batch", "chunk_id": "13", "question_id": 2, "question": "What is a requirement for using Fargate instances?", "answer_span": "Your jobs must start quickly, specifically less than 30 seconds.", "chunk": "your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. • The requirements of your jobs are 16 vCPUs or less, no GPUs, and 120 GiB of memory or less. For more information, see When to use Fargate. If you have the following requirements, we recommend that you use Amazon EC2 instances: • You require increased control over the instance selection or require using specific instance types. • Your jobs require resources that AWS Fargate can’t provide, such as GPUs, more memory, a custom AMI, or the Amazon Elastic Fabric Adapter. • You require a high level of throughput or concurrency. • You need to customize your AMI, Amazon EC2 Launch Template, or access to special Linux parameters. With Amazon EC2, you can more finely tune your workload to your specific requirements and run at scale if needed. 
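Where the Fargate criteria above are met (quick-starting jobs of 16 vCPUs or less, no GPUs, 120 GiB of memory or less), a managed Fargate compute environment needs far less configuration than an EC2 one, since there is no instance role, AMI, or instance type to choose. A minimal sketch; the subnet and security group IDs are placeholders.

```python
# Sketch: a managed AWS Fargate compute environment for jobs that fit the
# criteria above. Fargate manages the underlying capacity, so no instance
# types or instance role are specified. IDs below are placeholders.
import boto3

batch = boto3.client("batch")

batch.create_compute_environment(
    computeEnvironmentName="fargate-env",
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "FARGATE",
        "maxvCpus": 256,
        "subnets": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```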
Amazon EC2 On-Demand or Amazon EC2 Spot Most AWS Batch customers use Amazon EC2 Spot instances because of the savings over OnDemand instances. However, if your workload runs for multiple hours and can't be interrupted, On-Demand instances might be more suitable for you. You can always try Spot instances first and switch to On-Demand if necessary. If you have the following requirements and expectations, use Amazon EC2 On-Demand instances: • The runtime of your jobs is more than an hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If"} +{"global_id": 950, "doc_id": "batch", "chunk_id": "13", "question_id": 3, "question": "Why might you choose Amazon EC2 Spot instances?", "answer_span": "Most AWS Batch customers use Amazon EC2 Spot instances because of the savings over On-Demand instances.", "chunk": "your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. • The requirements of your jobs are 16 vCPUs or less, no GPUs, and 120 GiB of memory or less. For more information, see When to use Fargate. If you have the following requirements, we recommend that you use Amazon EC2 instances: • You require increased control over the instance selection or require using specific instance types. • Your jobs require resources that AWS Fargate can’t provide, such as GPUs, more memory, a custom AMI, or the Amazon Elastic Fabric Adapter. • You require a high level of throughput or concurrency. • You need to customize your AMI, Amazon EC2 Launch Template, or access to special Linux parameters. With Amazon EC2, you can more finely tune your workload to your specific requirements and run at scale if needed. Amazon EC2 On-Demand or Amazon EC2 Spot Most AWS Batch customers use Amazon EC2 Spot instances because of the savings over OnDemand instances. However, if your workload runs for multiple hours and can't be interrupted, On-Demand instances might be more suitable for you. You can always try Spot instances first and switch to On-Demand if necessary. If you have the following requirements and expectations, use Amazon EC2 On-Demand instances: • The runtime of your jobs is more than an hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If"} +{"global_id": 951, "doc_id": "batch", "chunk_id": "13", "question_id": 4, "question": "When should you use Amazon EC2 On-Demand instances?", "answer_span": "If you have the following requirements and expectations, use Amazon EC2 On-Demand instances: • The runtime of your jobs is more than an hour, and you can't tolerate interruptions to your workload.", "chunk": "your first time. With Fargate, you don't need to manage servers, handle capacity planning, or isolate container workloads for security. 
If you have the following requirements, we recommend you use Fargate instances: • Your jobs must start quickly, specifically less than 30 seconds. • The requirements of your jobs are 16 vCPUs or less, no GPUs, and 120 GiB of memory or less. For more information, see When to use Fargate. If you have the following requirements, we recommend that you use Amazon EC2 instances: • You require increased control over the instance selection or require using specific instance types. • Your jobs require resources that AWS Fargate can’t provide, such as GPUs, more memory, a custom AMI, or the Amazon Elastic Fabric Adapter. • You require a high level of throughput or concurrency. • You need to customize your AMI, Amazon EC2 Launch Template, or access to special Linux parameters. With Amazon EC2, you can more finely tune your workload to your specific requirements and run at scale if needed. Amazon EC2 On-Demand or Amazon EC2 Spot Most AWS Batch customers use Amazon EC2 Spot instances because of the savings over OnDemand instances. However, if your workload runs for multiple hours and can't be interrupted, On-Demand instances might be more suitable for you. You can always try Spot instances first and switch to On-Demand if necessary. If you have the following requirements and expectations, use Amazon EC2 On-Demand instances: • The runtime of your jobs is more than an hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If"} +{"global_id": 952, "doc_id": "batch", "chunk_id": "14", "question_id": 1, "question": "What is a requirement for using Amazon EC2 Spot instances regarding job runtime?", "answer_span": "The runtime for your jobs is typically 30 minutes or less.", "chunk": "hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If you have the following requirements and expectations, use Amazon EC2 Spot instances: • The runtime for your jobs is typically 30 minutes or less. • You can tolerate potential interruptions and job rescheduling as a part of your workload. For more information, see Spot Instance advisor. • Long running jobs can be restarted from a checkpoint if interrupted. You can mix both purchasing models by submitting on Spot instance first and then use On-Demand instance as a fallback option. For example, submit your jobs on a queue that's connected to compute environments that are running on Amazon EC2 Spot instances. If a job gets interrupted, catch the event from Amazon EventBridge and correlate it to a Spot instance reclamation. Then, resubmit the job to an On-Demand queue using an AWS Lambda function or AWS Step Functions. For more information, see Tutorial: Sending Amazon Simple Notification Service alerts for failed job events, Best practices for handling Amazon EC2 Spot Instance interruptions and Manage AWS Batch with Step Functions. Important Use different instance types, sizes, and Availability Zones for your On-Demand compute environment to maintain Amazon EC2 Spot instance pool availability and decrease the interruption rate. 
Use Amazon EC2 Spot best practices for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for"} +{"global_id": 953, "doc_id": "batch", "chunk_id": "14", "question_id": 2, "question": "What should you do if a job gets interrupted while using Spot instances?", "answer_span": "Then, resubmit the job to an On-Demand queue using an AWS Lambda function or AWS Step Functions.", "chunk": "hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If you have the following requirements and expectations, use Amazon EC2 Spot instances: • The runtime for your jobs is typically 30 minutes or less. • You can tolerate potential interruptions and job rescheduling as a part of your workload. For more information, see Spot Instance advisor. • Long running jobs can be restarted from a checkpoint if interrupted. You can mix both purchasing models by submitting on Spot instance first and then use On-Demand instance as a fallback option. For example, submit your jobs on a queue that's connected to compute environments that are running on Amazon EC2 Spot instances. If a job gets interrupted, catch the event from Amazon EventBridge and correlate it to a Spot instance reclamation. Then, resubmit the job to an On-Demand queue using an AWS Lambda function or AWS Step Functions. For more information, see Tutorial: Sending Amazon Simple Notification Service alerts for failed job events, Best practices for handling Amazon EC2 Spot Instance interruptions and Manage AWS Batch with Step Functions. Important Use different instance types, sizes, and Availability Zones for your On-Demand compute environment to maintain Amazon EC2 Spot instance pool availability and decrease the interruption rate. Use Amazon EC2 Spot best practices for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for"} +{"global_id": 954, "doc_id": "batch", "chunk_id": "14", "question_id": 3, "question": "What is important to maintain Amazon EC2 Spot instance pool availability?", "answer_span": "Use different instance types, sizes, and Availability Zones for your On-Demand compute environment to maintain Amazon EC2 Spot instance pool availability and decrease the interruption rate.", "chunk": "hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If you have the following requirements and expectations, use Amazon EC2 Spot instances: • The runtime for your jobs is typically 30 minutes or less. • You can tolerate potential interruptions and job rescheduling as a part of your workload. 
For more information, see Spot Instance advisor. • Long running jobs can be restarted from a checkpoint if interrupted. You can mix both purchasing models by submitting on Spot instance first and then use On-Demand instance as a fallback option. For example, submit your jobs on a queue that's connected to compute environments that are running on Amazon EC2 Spot instances. If a job gets interrupted, catch the event from Amazon EventBridge and correlate it to a Spot instance reclamation. Then, resubmit the job to an On-Demand queue using an AWS Lambda function or AWS Step Functions. For more information, see Tutorial: Sending Amazon Simple Notification Service alerts for failed job events, Best practices for handling Amazon EC2 Spot Instance interruptions and Manage AWS Batch with Step Functions. Important Use different instance types, sizes, and Availability Zones for your On-Demand compute environment to maintain Amazon EC2 Spot instance pool availability and decrease the interruption rate. Use Amazon EC2 Spot best practices for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for"} +{"global_id": 955, "doc_id": "batch", "chunk_id": "14", "question_id": 4, "question": "What can you do to optimize your workflow when using Amazon EC2 Spot instances?", "answer_span": "When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly.", "chunk": "hour, and you can't tolerate interruptions to your workload. Choose the right compute environment resource 490 AWS Batch User Guide • You have a strict SLO (service-level objective) for your overall workload and can’t increase computational time. • The instances that you require are more likely to see interruptions. If you have the following requirements and expectations, use Amazon EC2 Spot instances: • The runtime for your jobs is typically 30 minutes or less. • You can tolerate potential interruptions and job rescheduling as a part of your workload. For more information, see Spot Instance advisor. • Long running jobs can be restarted from a checkpoint if interrupted. You can mix both purchasing models by submitting on Spot instance first and then use On-Demand instance as a fallback option. For example, submit your jobs on a queue that's connected to compute environments that are running on Amazon EC2 Spot instances. If a job gets interrupted, catch the event from Amazon EventBridge and correlate it to a Spot instance reclamation. Then, resubmit the job to an On-Demand queue using an AWS Lambda function or AWS Step Functions. For more information, see Tutorial: Sending Amazon Simple Notification Service alerts for failed job events, Best practices for handling Amazon EC2 Spot Instance interruptions and Manage AWS Batch with Step Functions. Important Use different instance types, sizes, and Availability Zones for your On-Demand compute environment to maintain Amazon EC2 Spot instance pool availability and decrease the interruption rate. Use Amazon EC2 Spot best practices for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. 
To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for"} +{"global_id": 956, "doc_id": "batch", "chunk_id": "15", "question_id": 1, "question": "What allocation strategy should you choose for AWS Batch when using EC2 Spot instances?", "answer_span": "Choose the SPOT_CAPACITY_OPTIMIZED allocation strategy – AWS Batch chooses Amazon EC2 instances from the deepest Amazon EC2 Spot capacity pools.", "chunk": "for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for AWS Batch: Use Amazon EC2 Spot best practices for AWS Batch 491 AWS Batch User Guide • Choose the SPOT_CAPACITY_OPTIMIZED allocation strategy – AWS Batch chooses Amazon EC2 instances from the deepest Amazon EC2 Spot capacity pools. If you’re concerned about interruptions, this is a suitable choice. For more information, see Instance type allocation strategies for AWS Batch. • Diversify instance types – To diversify your instance types, consider compatible sizes and families, then let AWS Batch choose based on price or availability. For example, consider c5.24xlarge as an alternative to c5.12xlarge or c5a, c5n, c5d, m5, and m5d families. For more information, see Be flexible about instance types and Availability Zones. • Reduce job runtime or checkpoint – We advise against running jobs that take an hour or more when using Amazon EC2 Spot instances to avoid interruptions. If you divide or checkpoint your jobs into smaller parts that consist of 30 minutes or less, you can significantly reduce the possibility of interruptions. • Use automated retries – To avoid disruptions to AWS Batch jobs, set automated retries for jobs. Batch jobs can be disrupted for any of the following reasons: a non-zero exit code is returned, a service error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. For AWS Batch, if you set the retry parameter, the"} +{"global_id": 957, "doc_id": "batch", "chunk_id": "15", "question_id": 2, "question": "What is advised against when running jobs with Amazon EC2 Spot instances?", "answer_span": "We advise against running jobs that take an hour or more when using Amazon EC2 Spot instances to avoid interruptions.", "chunk": "for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for AWS Batch: Use Amazon EC2 Spot best practices for AWS Batch 491 AWS Batch User Guide • Choose the SPOT_CAPACITY_OPTIMIZED allocation strategy – AWS Batch chooses Amazon EC2 instances from the deepest Amazon EC2 Spot capacity pools. If you’re concerned about interruptions, this is a suitable choice. For more information, see Instance type allocation strategies for AWS Batch. • Diversify instance types – To diversify your instance types, consider compatible sizes and families, then let AWS Batch choose based on price or availability. 
For example, consider c5.24xlarge as an alternative to c5.12xlarge or c5a, c5n, c5d, m5, and m5d families. For more information, see Be flexible about instance types and Availability Zones. • Reduce job runtime or checkpoint – We advise against running jobs that take an hour or more when using Amazon EC2 Spot instances to avoid interruptions. If you divide or checkpoint your jobs into smaller parts that consist of 30 minutes or less, you can significantly reduce the possibility of interruptions. • Use automated retries – To avoid disruptions to AWS Batch jobs, set automated retries for jobs. Batch jobs can be disrupted for any of the following reasons: a non-zero exit code is returned, a service error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. For AWS Batch, if you set the retry parameter, the"} +{"global_id": 958, "doc_id": "batch", "chunk_id": "15", "question_id": 3, "question": "How many automated retries can you set for AWS Batch jobs?", "answer_span": "You can set up to 10 automated retries.", "chunk": "for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for AWS Batch: Use Amazon EC2 Spot best practices for AWS Batch 491 AWS Batch User Guide • Choose the SPOT_CAPACITY_OPTIMIZED allocation strategy – AWS Batch chooses Amazon EC2 instances from the deepest Amazon EC2 Spot capacity pools. If you’re concerned about interruptions, this is a suitable choice. For more information, see Instance type allocation strategies for AWS Batch. • Diversify instance types – To diversify your instance types, consider compatible sizes and families, then let AWS Batch choose based on price or availability. For example, consider c5.24xlarge as an alternative to c5.12xlarge or c5a, c5n, c5d, m5, and m5d families. For more information, see Be flexible about instance types and Availability Zones. • Reduce job runtime or checkpoint – We advise against running jobs that take an hour or more when using Amazon EC2 Spot instances to avoid interruptions. If you divide or checkpoint your jobs into smaller parts that consist of 30 minutes or less, you can significantly reduce the possibility of interruptions. • Use automated retries – To avoid disruptions to AWS Batch jobs, set automated retries for jobs. Batch jobs can be disrupted for any of the following reasons: a non-zero exit code is returned, a service error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. 
For AWS Batch, if you set the retry parameter, the"} +{"global_id": 959, "doc_id": "batch", "chunk_id": "15", "question_id": 4, "question": "What should you consider to diversify your instance types?", "answer_span": "To diversify your instance types, consider compatible sizes and families, then let AWS Batch choose based on price or availability.", "chunk": "for AWS Batch When you choose Amazon Elastic Compute Cloud (EC2) Spot instances, you likely can optimize your workflow to save costs, sometimes significantly. For more information, see Best practices for Amazon EC2 Spot. To optimize your workflow to save costs, consider the following Amazon EC2 Spot best practices for AWS Batch: Use Amazon EC2 Spot best practices for AWS Batch 491 AWS Batch User Guide • Choose the SPOT_CAPACITY_OPTIMIZED allocation strategy – AWS Batch chooses Amazon EC2 instances from the deepest Amazon EC2 Spot capacity pools. If you’re concerned about interruptions, this is a suitable choice. For more information, see Instance type allocation strategies for AWS Batch. • Diversify instance types – To diversify your instance types, consider compatible sizes and families, then let AWS Batch choose based on price or availability. For example, consider c5.24xlarge as an alternative to c5.12xlarge or c5a, c5n, c5d, m5, and m5d families. For more information, see Be flexible about instance types and Availability Zones. • Reduce job runtime or checkpoint – We advise against running jobs that take an hour or more when using Amazon EC2 Spot instances to avoid interruptions. If you divide or checkpoint your jobs into smaller parts that consist of 30 minutes or less, you can significantly reduce the possibility of interruptions. • Use automated retries – To avoid disruptions to AWS Batch jobs, set automated retries for jobs. Batch jobs can be disrupted for any of the following reasons: a non-zero exit code is returned, a service error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. For AWS Batch, if you set the retry parameter, the"} +{"global_id": 960, "doc_id": "batch", "chunk_id": "16", "question_id": 1, "question": "How many automated retries can you set up to?", "answer_span": "You can set up to 10 automated retries.", "chunk": "error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. For AWS Batch, if you set the retry parameter, the job is placed at the front of the job queue. That is, the job is given priority. When you create the job definition or you submit the job in the AWS CLI, you can configure a retry strategy. For more information, see submit-job. $ aws batch submit-job --job-name MyJob \\ --job-queue MyJQ \\ --job-definition MyJD \\ --retry-strategy attempts=2 • Use custom retries – You can configure a job retry strategy to a specific application exit code or instance reclamation. In the following example, if the host causes the failure, the job can be retried up to five times. However, if the job fails for a different reason, the job exits and the status is set to FAILED. 
\"retryStrategy\": { \"attempts\": 5, \"evaluateOnExit\": [{ \"onStatusReason\" :\"Host EC2*\", \"action\": \"RETRY\" Use Amazon EC2 Spot best practices for AWS Batch 492 AWS Batch User Guide },{ \"onReason\" : \"*\", \"action\": \"EXIT\" }] } • Use the Spot Interruption Dashboard – You can use the Spot Interruption Dashboard to track Spot interruptions. The application provides metrics on Amazon EC2 Spot instances that are reclaimed and which Availability Zones that Spot instances are in. For more information, see Spot Interruption Dashboard Common errors and troubleshooting Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck"} +{"global_id": 961, "doc_id": "batch", "chunk_id": "16", "question_id": 2, "question": "What is recommended for the number of automated retries?", "answer_span": "we recommend that you set at least 1-3 automated retries.", "chunk": "error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. For AWS Batch, if you set the retry parameter, the job is placed at the front of the job queue. That is, the job is given priority. When you create the job definition or you submit the job in the AWS CLI, you can configure a retry strategy. For more information, see submit-job. $ aws batch submit-job --job-name MyJob \\ --job-queue MyJQ \\ --job-definition MyJD \\ --retry-strategy attempts=2 • Use custom retries – You can configure a job retry strategy to a specific application exit code or instance reclamation. In the following example, if the host causes the failure, the job can be retried up to five times. However, if the job fails for a different reason, the job exits and the status is set to FAILED. \"retryStrategy\": { \"attempts\": 5, \"evaluateOnExit\": [{ \"onStatusReason\" :\"Host EC2*\", \"action\": \"RETRY\" Use Amazon EC2 Spot best practices for AWS Batch 492 AWS Batch User Guide },{ \"onReason\" : \"*\", \"action\": \"EXIT\" }] } • Use the Spot Interruption Dashboard – You can use the Spot Interruption Dashboard to track Spot interruptions. The application provides metrics on Amazon EC2 Spot instances that are reclaimed and which Availability Zones that Spot instances are in. For more information, see Spot Interruption Dashboard Common errors and troubleshooting Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck"} +{"global_id": 962, "doc_id": "batch", "chunk_id": "16", "question_id": 3, "question": "What happens when you set the retry parameter in AWS Batch?", "answer_span": "the job is placed at the front of the job queue.", "chunk": "error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. 
For AWS Batch, if you set the retry parameter, the job is placed at the front of the job queue. That is, the job is given priority. When you create the job definition or you submit the job in the AWS CLI, you can configure a retry strategy. For more information, see submit-job. $ aws batch submit-job --job-name MyJob \\ --job-queue MyJQ \\ --job-definition MyJD \\ --retry-strategy attempts=2 • Use custom retries – You can configure a job retry strategy to a specific application exit code or instance reclamation. In the following example, if the host causes the failure, the job can be retried up to five times. However, if the job fails for a different reason, the job exits and the status is set to FAILED. \"retryStrategy\": { \"attempts\": 5, \"evaluateOnExit\": [{ \"onStatusReason\" :\"Host EC2*\", \"action\": \"RETRY\" Use Amazon EC2 Spot best practices for AWS Batch 492 AWS Batch User Guide },{ \"onReason\" : \"*\", \"action\": \"EXIT\" }] } • Use the Spot Interruption Dashboard – You can use the Spot Interruption Dashboard to track Spot interruptions. The application provides metrics on Amazon EC2 Spot instances that are reclaimed and which Availability Zones that Spot instances are in. For more information, see Spot Interruption Dashboard Common errors and troubleshooting Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck"} +{"global_id": 963, "doc_id": "batch", "chunk_id": "16", "question_id": 4, "question": "What can you use to track Spot interruptions?", "answer_span": "You can use the Spot Interruption Dashboard to track Spot interruptions.", "chunk": "error occurs, or an instance reclamation occurs. You can set up to 10 automated retries. For a start, we recommend that you set at least 1-3 automated retries. For information about tracking Amazon EC2 Spot interruptions, see Spot Interruption Dashboard. For AWS Batch, if you set the retry parameter, the job is placed at the front of the job queue. That is, the job is given priority. When you create the job definition or you submit the job in the AWS CLI, you can configure a retry strategy. For more information, see submit-job. $ aws batch submit-job --job-name MyJob \\ --job-queue MyJQ \\ --job-definition MyJD \\ --retry-strategy attempts=2 • Use custom retries – You can configure a job retry strategy to a specific application exit code or instance reclamation. In the following example, if the host causes the failure, the job can be retried up to five times. However, if the job fails for a different reason, the job exits and the status is set to FAILED. \"retryStrategy\": { \"attempts\": 5, \"evaluateOnExit\": [{ \"onStatusReason\" :\"Host EC2*\", \"action\": \"RETRY\" Use Amazon EC2 Spot best practices for AWS Batch 492 AWS Batch User Guide },{ \"onReason\" : \"*\", \"action\": \"EXIT\" }] } • Use the Spot Interruption Dashboard – You can use the Spot Interruption Dashboard to track Spot interruptions. The application provides metrics on Amazon EC2 Spot instances that are reclaimed and which Availability Zones that Spot instances are in. 
For more information, see Spot Interruption Dashboard Common errors and troubleshooting Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck"} +{"global_id": 964, "doc_id": "batch", "chunk_id": "17", "question_id": 1, "question": "What often causes errors in AWS Batch?", "answer_span": "Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements.", "chunk": "Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck in RUNNABLE status, see Jobs stuck in a RUNNABLE status. For information about troubleshooting compute environments in an INVALID state, see INVALID compute environment. • Check Amazon EC2 Spot vCPU quotas – Verify that your current service quotas meet the job requirements. For example, suppose that your current service quota is 256 vCPUs and the job requires 10,000 vCPUs. Then, the service quota doesn't meet the job requirement. For more information and troubleshooting instructions, see Amazon EC2 service quotas and How do I increase the service quota of my Amazon EC2resources?. • Jobs fail before the application runs – Some jobs might fail because of a DockerTimeoutError error or a CannotPullContainerError error. For troubleshooting information, see How do I resolve the \"DockerTimeoutError\" error in AWS Batch?. • Insufficient IP addresses – The number of IP addresses in your VPC and subnets can limit the number of instances that you can create. Use Classless Inter-Domain Routings (CIDRs) to provide more IP addresses than are required to run your workloads. If necessary, you can also build a dedicated VPC with a large address space. For example, you can create a VPC with multiple CIDRs in 10.x.0.0/16 and a subnet in every Availability Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 965, "doc_id": "batch", "chunk_id": "17", "question_id": 2, "question": "What should you check regarding Amazon EC2 Spot vCPU quotas?", "answer_span": "Check Amazon EC2 Spot vCPU quotas – Verify that your current service quotas meet the job requirements.", "chunk": "Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck in RUNNABLE status, see Jobs stuck in a RUNNABLE status. For information about troubleshooting compute environments in an INVALID state, see INVALID compute environment. • Check Amazon EC2 Spot vCPU quotas – Verify that your current service quotas meet the job requirements. For example, suppose that your current service quota is 256 vCPUs and the job requires 10,000 vCPUs. Then, the service quota doesn't meet the job requirement. 
For more information and troubleshooting instructions, see Amazon EC2 service quotas and How do I increase the service quota of my Amazon EC2resources?. • Jobs fail before the application runs – Some jobs might fail because of a DockerTimeoutError error or a CannotPullContainerError error. For troubleshooting information, see How do I resolve the \"DockerTimeoutError\" error in AWS Batch?. • Insufficient IP addresses – The number of IP addresses in your VPC and subnets can limit the number of instances that you can create. Use Classless Inter-Domain Routings (CIDRs) to provide more IP addresses than are required to run your workloads. If necessary, you can also build a dedicated VPC with a large address space. For example, you can create a VPC with multiple CIDRs in 10.x.0.0/16 and a subnet in every Availability Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 966, "doc_id": "batch", "chunk_id": "17", "question_id": 3, "question": "What might cause jobs to fail before the application runs?", "answer_span": "Some jobs might fail because of a DockerTimeoutError error or a CannotPullContainerError error.", "chunk": "Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck in RUNNABLE status, see Jobs stuck in a RUNNABLE status. For information about troubleshooting compute environments in an INVALID state, see INVALID compute environment. • Check Amazon EC2 Spot vCPU quotas – Verify that your current service quotas meet the job requirements. For example, suppose that your current service quota is 256 vCPUs and the job requires 10,000 vCPUs. Then, the service quota doesn't meet the job requirement. For more information and troubleshooting instructions, see Amazon EC2 service quotas and How do I increase the service quota of my Amazon EC2resources?. • Jobs fail before the application runs – Some jobs might fail because of a DockerTimeoutError error or a CannotPullContainerError error. For troubleshooting information, see How do I resolve the \"DockerTimeoutError\" error in AWS Batch?. • Insufficient IP addresses – The number of IP addresses in your VPC and subnets can limit the number of instances that you can create. Use Classless Inter-Domain Routings (CIDRs) to provide more IP addresses than are required to run your workloads. If necessary, you can also build a dedicated VPC with a large address space. For example, you can create a VPC with multiple CIDRs in 10.x.0.0/16 and a subnet in every Availability Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 967, "doc_id": "batch", "chunk_id": "17", "question_id": 4, "question": "How can you provide more IP addresses than are required to run your workloads?", "answer_span": "Use Classless Inter-Domain Routings (CIDRs) to provide more IP addresses than are required to run your workloads.", "chunk": "Errors in AWS Batch often occur at the application level or are caused by instance configurations that don’t meet your specific job requirements. 
Other issues include jobs getting stuck in the RUNNABLE status or compute environments getting stuck in an INVALID state. For more information about troubleshooting jobs getting stuck in RUNNABLE status, see Jobs stuck in a RUNNABLE status. For information about troubleshooting compute environments in an INVALID state, see INVALID compute environment. • Check Amazon EC2 Spot vCPU quotas – Verify that your current service quotas meet the job requirements. For example, suppose that your current service quota is 256 vCPUs and the job requires 10,000 vCPUs. Then, the service quota doesn't meet the job requirement. For more information and troubleshooting instructions, see Amazon EC2 service quotas and How do I increase the service quota of my Amazon EC2resources?. • Jobs fail before the application runs – Some jobs might fail because of a DockerTimeoutError error or a CannotPullContainerError error. For troubleshooting information, see How do I resolve the \"DockerTimeoutError\" error in AWS Batch?. • Insufficient IP addresses – The number of IP addresses in your VPC and subnets can limit the number of instances that you can create. Use Classless Inter-Domain Routings (CIDRs) to provide more IP addresses than are required to run your workloads. If necessary, you can also build a dedicated VPC with a large address space. For example, you can create a VPC with multiple CIDRs in 10.x.0.0/16 and a subnet in every Availability Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 968, "doc_id": "batch", "chunk_id": "18", "question_id": 1, "question": "What is the CIDR of the zone?", "answer_span": "Zone with a CIDR of 10.x.y.0/17.", "chunk": "Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 969, "doc_id": "batch", "chunk_id": "18", "question_id": 2, "question": "What range can x be in the CIDR?", "answer_span": "x is between 1-4", "chunk": "Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 970, "doc_id": "batch", "chunk_id": "18", "question_id": 3, "question": "What values can y take in the CIDR?", "answer_span": "y is either 0 or 128.", "chunk": "Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 971, "doc_id": "batch", "chunk_id": "18", "question_id": 4, "question": "How many IP addresses does this configuration provide in every subnet?", "answer_span": "This configuration provides 36,000 IP addresses in every subnet.", "chunk": "Zone with a CIDR of 10.x.y.0/17. In this example, x is between 1-4 and y is either 0 or 128. This configuration provides 36,000 IP addresses in every subnet. Common errors and troubleshooting 493"} +{"global_id": 972, "doc_id": "eks", "chunk_id": "0", "question_id": 1, "question": "What does Amazon EKS provide?", "answer_span": "Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters.", "chunk": "Amazon EKS User Guide What is Amazon EKS? 
Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} +{"global_id": 973, "doc_id": "eks", "chunk_id": "0", "question_id": 2, "question": "What are two main approaches to using Amazon EKS?", "answer_span": "Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. 
Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} +{"global_id": 974, "doc_id": "eks", "chunk_id": "0", "question_id": 3, "question": "How does Amazon EKS help with application deployment?", "answer_span": "With EKS, you can: • Deploy applications faster with less operational overhead.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. 
For more information, see Amazon Elastic"} +{"global_id": 975, "doc_id": "eks", "chunk_id": "0", "question_id": 4, "question": "What does EKS Auto Mode simplify?", "answer_span": "It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services.", "chunk": "Amazon EKS User Guide What is Amazon EKS? Amazon EKS: Simplified Kubernetes Management Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service that eliminates the complexity of operating Kubernetes clusters. With EKS, you can: • Deploy applications faster with less operational overhead • Scale seamlessly to meet changing workload demands • Improve security through AWS integration and automated updates • Choose between standard EKS or fully automated EKS Auto Mode Amazon Elastic Kubernetes Service (Amazon EKS) is the premiere platform for running Kubernetes clusters, both in the Amazon Web Services (AWS) cloud and in your own data centers (EKS Anywhere and Amazon EKS Hybrid Nodes). Amazon EKS simplifies building, securing, and maintaining Kubernetes clusters. It can be more cost effective at providing enough resources to meet peak demand than maintaining your own data centers. Two of the main approaches to using Amazon EKS are as follows: • EKS standard: AWS manages the Kubernetes control plane when you create a cluster with EKS. Components that manage nodes, schedule workloads, integrate with the AWS cloud, and store and scale control plane information to keep your clusters up and running, are handled for you automatically. • EKS Auto Mode: Using the EKS Auto Mode feature, EKS extends its control to manage Nodes (Kubernetes data plane) as well. It simplifies Kubernetes management by automatically provisioning infrastructure, selecting optimal compute instances, dynamically scaling resources, continuously optimizing costs, patching operating systems, and integrating with AWS security services. The following diagram illustrates how Amazon EKS integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic"} +{"global_id": 976, "doc_id": "eks", "chunk_id": "1", "question_id": 1, "question": "What does Amazon EKS help you accelerate?", "answer_span": "Amazon EKS helps you accelerate time to production", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. 
Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 977, "doc_id": "eks", "chunk_id": "1", "question_id": 2, "question": "What type of resources does EKS allow for compute?", "answer_span": "EKS allows the full range of Amazon EC2 instance types", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. 
For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 978, "doc_id": "eks", "chunk_id": "1", "question_id": 3, "question": "What does EKS Auto Mode automatically create?", "answer_span": "EKS Auto Mode automatically creates storage classes using EBS volumes", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 979, "doc_id": "eks", "chunk_id": "1", "question_id": 4, "question": "What monitoring tools are included for Amazon EKS clusters?", "answer_span": "Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator", "chunk": "integrates your Kubernetes clusters with the AWS cloud, depending on which method of cluster creation you choose: Amazon EKS: Simplified Kubernetes Management 1 Amazon EKS User Guide Amazon EKS helps you accelerate time to production, improve performance, availability and resiliency, and enhance system security. For more information, see Amazon Elastic Kubernetes Service. Features of Amazon EKS Amazon EKS provides the following high-level features: Management interfaces EKS offers multiple interfaces to provision, manage, and maintain clusters, including AWS Management Console, Amazon EKS API/SDKs, CDK, AWS CLI, eksctl CLI, AWS CloudFormation, and Terraform. For more information, see Get started and Configure clusters. 
Features of Amazon EKS 2 Amazon EKS User Guide Access control tools EKS relies on both Kubernetes and AWS Identity and Access Management (AWS IAM) features to manage access from users and workloads. For more information, see the section called “Kubernetes API access” and the section called “Workload access to AWS ”. Compute resources For compute resources, EKS allows the full range of Amazon EC2 instance types and AWS innovations such as Nitro and Graviton with Amazon EKS for you to optimize the compute for your workloads. For more information, see Manage compute. Storage EKS Auto Mode automatically creates storage classes using EBS volumes. Using Container Storage Interface (CSI) drivers, you can also use Amazon S3, Amazon EFS, Amazon FSX, and Amazon File Cache for your application storage needs. For more information, see App data storage. Security The shared responsibility model is employed as it relates to Security in Amazon EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics"} +{"global_id": 980, "doc_id": "eks", "chunk_id": "2", "question_id": 1, "question": "What monitoring tools are mentioned for Amazon EKS clusters?", "answer_span": "Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. 
When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 981, "doc_id": "eks", "chunk_id": "2", "question_id": 2, "question": "What type of support does EKS offer for Kubernetes?", "answer_span": "EKS offers both standard support and extended support for Kubernetes.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 982, "doc_id": "eks", "chunk_id": "2", "question_id": 3, "question": "Which service is used to monitor AWS resources and applications in real time?", "answer_span": "Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. 
Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 983, "doc_id": "eks", "chunk_id": "2", "question_id": 4, "question": "What is the basis for Amazon EKS pricing?", "answer_span": "Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes.", "chunk": "EKS. For more information, see Security best practices, Infrastructure security, and Kubernetes security. Monitoring tools Use the observability dashboard to monitor Amazon EKS clusters. Monitoring tools include Prometheus, CloudWatch, Cloudtrail, and ADOT Operator. For more information on dashboards, metrics servers, and other tools, see EKS cluster costs and Kubernetes Metrics Server. Kubernetes compatibility and support Amazon EKS is certified Kubernetes-conformant, so you can deploy Kubernetes-compatible applications without refactoring and use Kubernetes community tooling and plugins. EKS offers both standard support and eks/latest/userguide/kubernetes-versions-extended.html[extended support,type=\"documentation\"] for Kubernetes. For more information, see eks/latest/ userguide/kubernetes-versions.html[Understand the Kubernetes version lifecycle on EKS,type=\"documentation\"]. Related services Services to use with Amazon EKS Related services 3 Amazon EKS User Guide You can use other AWS services with the clusters that you deploy using Amazon EKS: Amazon EC2 Obtain on-demand, scalable compute capacity with Amazon EC2. Amazon EBS Attach scalable, high-performance block storage resources with Amazon EBS. Amazon ECR Store container images securely with Amazon ECR. Amazon CloudWatch Monitor AWS resources and applications in real time with Amazon CloudWatch. 
Amazon Prometheus Track metrics for containerized applications with Amazon Managed Service for Prometheus. Elastic Load Balancing Distribute incoming traffic across multiple targets with Elastic Load Balancing. Amazon GuardDuty Detect threats to EKS clusters with Amazon GuardDuty. AWS Resilience Hub Assess EKS cluster resiliency with AWS Resilience Hub. Amazon EKS Pricing Amazon EKS has per cluster pricing based on Kubernetes cluster version support, pricing for Amazon EKS Auto Mode, and per vCPU pricing for Amazon EKS Hybrid Nodes. When using Amazon EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the"} +{"global_id": 984, "doc_id": "eks", "chunk_id": "3", "question_id": 1, "question": "What do you pay separately for when using EKS?", "answer_span": "you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. 
Executing machine learning workloads Amazon EKS"} +{"global_id": 985, "doc_id": "eks", "chunk_id": "3", "question_id": 2, "question": "What are you charged for when running Kubernetes worker nodes as Amazon EC2 instances?", "answer_span": "you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS"} +{"global_id": 986, "doc_id": "eks", "chunk_id": "3", "question_id": 3, "question": "What can you use to run serverless applications with Amazon EKS?", "answer_span": "Use AWS Fargate with Amazon EKS to run serverless applications.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. 
For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS"} +{"global_id": 987, "doc_id": "eks", "chunk_id": "3", "question_id": 4, "question": "What does Amazon EKS offer?", "answer_span": "Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications.", "chunk": "EKS, you pay separately for the AWS resources you use to run your applications on Kubernetes worker nodes. For example, if you are running Kubernetes worker nodes as Amazon EC2 instances with Amazon EBS volumes and public IPv4 addresses, you are charged for the instance capacity through Amazon EC2, the volume capacity through Amazon EBS, and the IPv4 address through Amazon VPC. Amazon EKS Pricing 4 Amazon EKS User Guide Visit the respective pricing pages of the AWS services you are using with your Kubernetes applications for detailed pricing information. • For Amazon EKS cluster, Amazon EKS Auto Mode, and Amazon EKS Hybrid Nodes pricing, see Amazon EKS Pricing. • For Amazon EC2 pricing, see Amazon EC2 On-Demand Pricing and Amazon EC2 Spot Pricing. • For AWS Fargate pricing, see AWS Fargate Pricing. • You can use your savings plans for compute used in Amazon EKS clusters. For more information, see Pricing with Savings Plans. Common use cases in Amazon EKS Amazon EKS offers robust managed Kubernetes services on AWS, designed to optimize containerized applications. The following are a few of the most common use cases of Amazon EKS, helping you leverage its strengths for your specific needs. Deploying high-availability applications Using Elastic Load Balancing, you can make sure that your applications are highly available across multiple Availability Zones. Building microservices architectures Use Kubernetes service discovery features with AWS Cloud Map or Amazon VPC Lattice to build resilient systems. Automating software release process Manage continuous integration and continuous deployment (CICD) pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. 
Executing machine learning workloads Amazon EKS"} +{"global_id": 988, "doc_id": "eks", "chunk_id": "4", "question_id": 1, "question": "What does Amazon EKS and Fargate handle while you focus on application development?", "answer_span": "Amazon EKS and Fargate handle the underlying infrastructure.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 989, "doc_id": "eks", "chunk_id": "4", "question_id": 2, "question": "Which machine learning frameworks is Amazon EKS compatible with?", "answer_span": "Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. 
For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 990, "doc_id": "eks", "chunk_id": "4", "question_id": 3, "question": "What can you use to automate Kubernetes cluster lifecycle management on your own infrastructure?", "answer_span": "You can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. 
Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 991, "doc_id": "eks", "chunk_id": "4", "question_id": 4, "question": "What practices should you implement to ensure compliance with Amazon EKS?", "answer_span": "Implement strong security practices and maintain compliance with Amazon EKS.", "chunk": "pipelines that simplify the process of automated building, testing, and deployment of applications. Running serverless applications Use AWS Fargate with Amazon EKS to run serverless applications. This means you can focus solely on application development, while Amazon EKS and Fargate handle the underlying infrastructure. Executing machine learning workloads Amazon EKS is compatible with popular machine learning frameworks such as TensorFlow, MXNet, and PyTorch. With GPU support, you can handle even complex machine learning tasks effectively. Common use cases 5 Amazon EKS User Guide Deploying consistently on premises and in the cloud To simplify running Kubernetes in on-premises environments, you can use the same Amazon EKS clusters, features, and tools to run self-managed nodes on AWS Outposts or can use Amazon EKS Hybrid Nodes with your own infrastructure. For self-contained, air-gapped environments, you can use Amazon EKS Anywhere to automate Kubernetes cluster lifecycle management on your own infrastructure. Running cost-effective batch processing and big data workloads Utilize Spot Instances to run your batch processing and big data workloads such as Apache Hadoop and Spark, at a fraction of the cost. This lets you take advantage of unused Amazon EC2 capacity at discounted prices. Securing application and ensuring compliance Implement strong security practices and maintain compliance with Amazon EKS, which integrates with AWS security services such as AWS Identity and Access Management (IAM), Amazon Virtual Private Cloud (Amazon VPC), and AWS Key Management Service (AWS KMS). This ensures data privacy and protection as per industry standards. Amazon EKS architecture Amazon EKS aligns with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with"} +{"global_id": 992, "doc_id": "eks", "chunk_id": "5", "question_id": 1, "question": "What ensures every cluster has its own unique Kubernetes control plane?", "answer_span": "Amazon EKS ensures every cluster has its own unique Kubernetes control plane.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. 
Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} +{"global_id": 993, "doc_id": "eks", "chunk_id": "5", "question_id": 2, "question": "What does Amazon EKS use to limit traffic between control plane components within a single cluster?", "answer_span": "Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. 
Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} +{"global_id": 994, "doc_id": "eks", "chunk_id": "5", "question_id": 3, "question": "What is crucial for meeting specific requirements and optimizing resource utilization in an Amazon EKS cluster?", "answer_span": "Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} +{"global_id": 995, "doc_id": "eks", "chunk_id": "5", "question_id": 4, "question": "What does EKS Auto Mode extend beyond the control plane?", "answer_span": "EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management.", "chunk": "with the general cluster architecture of Kubernetes. For more information, see Kubernetes Components in the Kubernetes documentation. 
The following sections summarize some extra architecture details for Amazon EKS. Control plane Amazon EKS ensures every cluster has its own unique Kubernetes control plane. This design keeps each cluster’s infrastructure separate, with no overlaps between clusters or AWS accounts. The setup includes: Distributed components The control plane positions at least two API server instances and three etcd instances across three AWS Availability Zones within an AWS Region. Optimal performance Amazon EKS actively monitors and adjusts control plane instances to maintain peak performance. Architecture 6 Amazon EKS User Guide Resilience If a control plane instance falters, Amazon EKS quickly replaces it, using different Availability Zone if needed. Consistent uptime By running clusters across multiple Availability Zones, a reliable API server endpoint availability Service Level Agreement (SLA) is achieved. Amazon EKS uses Amazon Virtual Private Cloud (Amazon VPC) to limit traffic between control plane components within a single cluster. Cluster components can’t view or receive communication from other clusters or AWS accounts, except when authorized by Kubernetes role-based access control (RBAC) policies. Compute In addition to the control plane, an Amazon EKS cluster has a set of worker machines called nodes. Selecting the appropriate Amazon EKS cluster node type is crucial for meeting your specific requirements and optimizing resource utilization. Amazon EKS offers the following primary node types: EKS Auto Mode EKS Auto Mode extends AWS management beyond the control plane to include the data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod"} +{"global_id": 996, "doc_id": "eks", "chunk_id": "6", "question_id": 1, "question": "What does EKS Auto Mode dynamically manage based on workload demands?", "answer_span": "EKS Auto Mode dynamically manages nodes based on workload demands", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. 
Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"} +{"global_id": 997, "doc_id": "eks", "chunk_id": "6", "question_id": 2, "question": "What is AWS Fargate?", "answer_span": "AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. 
You are in charge of managing, scaling,"} +{"global_id": 998, "doc_id": "eks", "chunk_id": "6", "question_id": 3, "question": "What does Karpenter help improve?", "answer_span": "Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"} +{"global_id": 999, "doc_id": "eks", "chunk_id": "6", "question_id": 4, "question": "What do managed node groups ease in operational aspects?", "answer_span": "AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects", "chunk": "data plane, automating cluster infrastructure management. It integrates core Kubernetes capabilities as built-in components, including compute autoscaling, networking, load balancing, DNS, storage, and GPU support. EKS Auto Mode dynamically manages nodes based on workload demands, using immutable AMIs with enhanced security features. It automates updates and upgrades while respecting Pod Disruption Budgets, and includes managed components that would otherwise require add-on management. This option is ideal for users who want to leverage AWS expertise for day-to-day operations, minimize operational overhead, and focus on application development rather than infrastructure management. 
AWS Fargate Fargate is a serverless compute engine for containers that eliminates the need to manage the underlying instances. With Fargate, you specify your application’s resource needs, and AWS automatically provisions, scales, and maintains the infrastructure. This option is ideal for users who prioritize ease-of-use and want to concentrate on application development and deployment rather than managing infrastructure. Compute 7 Amazon EKS User Guide Karpenter Karpenter is a flexible, high-performance Kubernetes cluster autoscaler that helps improve application availability and cluster efficiency. Karpenter launches right-sized compute resources in response to changing application load. This option can provision just-in-time compute resources that meet the requirements of your workload. Managed node groups Managed node groups are a blend of automation and customization for managing a collection of Amazon EC2 instances within an Amazon EKS cluster. AWS takes care of tasks like patching, updating, and scaling nodes, easing operational aspects. In parallel, custom kubelet arguments are supported, opening up possibilities for advanced CPU and memory management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling,"} +{"global_id": 1000, "doc_id": "eks", "chunk_id": "7", "question_id": 1, "question": "What do self-managed nodes offer within an Amazon EKS cluster?", "answer_span": "Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. 
The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} +{"global_id": 1001, "doc_id": "eks", "chunk_id": "7", "question_id": 2, "question": "What is the purpose of Amazon EKS Hybrid Nodes?", "answer_span": "With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} +{"global_id": 1002, "doc_id": "eks", "chunk_id": "7", "question_id": 3, "question": "What is Amazon Elastic Kubernetes Service based on?", "answer_span": "Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. 
Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} +{"global_id": 1003, "doc_id": "eks", "chunk_id": "7", "question_id": 4, "question": "What are the three sections that Kubernetes concepts are divided into?", "answer_span": "This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”.", "chunk": "management policies. Moreover, they enhance security via AWS Identity and Access Management (IAM) roles for service accounts, while curbing the need for separate permissions per cluster. Self-managed nodes Self-managed nodes offer full control over your Amazon EC2 instances within an Amazon EKS cluster. You are in charge of managing, scaling, and maintaining the nodes, giving you total control over the underlying infrastructure. This option is suitable for users who need granular control and customization of their nodes and are ready to invest time in managing and maintaining their infrastructure. Amazon EKS Hybrid Nodes With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your onpremises and edge applications. Kubernetes concepts Amazon Elastic Kubernetes Service (Amazon EKS) is an AWS managed service based on the open source Kubernetes project. While there are things you need to know about how the Amazon EKS service integrates with AWS Cloud (particularly when you first create an Amazon EKS cluster), once it’s up and running, you use your Amazon EKS cluster in much that same way as you would any other Kubernetes cluster. So to begin managing Kubernetes clusters and deploying workloads, you need at least a basic understanding of Kubernetes concepts. Kubernetes concepts 8 Amazon EKS User Guide This page divides Kubernetes concepts into three sections: the section called “Why Kubernetes?”, the section called “Clusters”, and the section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. 
The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and"} +{"global_id": 1004, "doc_id": "eks", "chunk_id": "8", "question_id": 1, "question": "What is the value of running a Kubernetes service?", "answer_span": "the value of running a Kubernetes service, in particular as a managed service like Amazon EKS.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 1005, "doc_id": "eks", "chunk_id": "8", "question_id": 2, "question": "What does the Workloads section cover?", "answer_span": "The Workloads section covers how Kubernetes applications are built, stored, run, and managed.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. 
For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 1006, "doc_id": "eks", "chunk_id": "8", "question_id": 3, "question": "What was Kubernetes designed to improve?", "answer_span": "Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. 
The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 1007, "doc_id": "eks", "chunk_id": "8", "question_id": 4, "question": "What do developers typically create to describe the desired state of the application?", "answer_span": "The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application.", "chunk": "section called “Workloads”. The first section describes the value of running a Kubernetes service, in particular as a managed service like Amazon EKS. The Workloads section covers how Kubernetes applications are built, stored, run, and managed. The Clusters section lays out the different components that make up Kubernetes clusters and what your responsibilities are for creating and maintaining Kubernetes clusters. Topics • Why Kubernetes? • Clusters • Workloads • Next steps As you go through this content, links will lead you to further descriptions of Kubernetes concepts in both Amazon EKS and Kubernetes documentation, in case you want to take deep dives into any of the topics we cover here. For details about how Amazon EKS implements Kubernetes control plane and compute features, see the section called “Architecture”. Why Kubernetes? Kubernetes was designed to improve availability and scalability when running mission-critical, production-quality containerized applications. Rather than just running Kubernetes on a single machine (although that is possible), Kubernetes achieves those goals by allowing you to run applications across sets of computers that can expand or contract to meet demand. Kubernetes includes features that make it easier for you to: • Deploy applications on multiple machines (using containers deployed in Pods) • Monitor container health and restart failed containers • Scale containers up and down based on load • Update containers with new versions • Allocate resources between containers • Balance traffic across machines Having Kubernetes automate these types of complex tasks allows an application developer to focus on building and improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity"} +{"global_id": 1008, "doc_id": "eks", "chunk_id": "9", "question_id": 1, "question": "What format do configuration files typically have in Kubernetes?", "answer_span": "formatted as YAML files", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. 
Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 1009, "doc_id": "eks", "chunk_id": "9", "question_id": 2, "question": "What is Kubernetes primarily used for?", "answer_span": "Kubernetes is a container orchestration tool.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. 
• Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 1010, "doc_id": "eks", "chunk_id": "9", "question_id": 3, "question": "How does Kubernetes respond if the demand for applications exceeds capacity?", "answer_span": "Kubernetes is able to scale up.", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 1011, "doc_id": "eks", "chunk_id": "9", "question_id": 4, "question": "What happens if an application or node becomes unhealthy or unavailable?", "answer_span": "Kubernetes can move running workloads", "chunk": "improving their application workloads, rather than worrying about Why Kubernetes? 9 Amazon EKS User Guide infrastructure. The developer typically creates configuration files, formatted as YAML files, that describe the desired state of the application. This could include which containers to run, resource limits, number of Pod replicas, CPU/memory allocation, affinity rules, and more. Attributes of Kubernetes To achieve its goals, Kubernetes has the following attributes: • Containerized — Kubernetes is a container orchestration tool. To use Kubernetes, you must first have your applications containerized. Depending on the type of application, this could be as a set of microservices, as batch jobs or in other forms. Then, your applications can take advantage of a Kubernetes workflow that encompasses a huge ecosystem of tools, where containers can be stored as images in a container registry, deployed to a Kubernetes cluster, and run on an available node. 
You can build and test individual containers on your local computer with Docker or another container runtime, before deploying them to your Kubernetes cluster. • Scalable — If the demand for your applications exceeds the capacity of the running instances of those applications, Kubernetes is able to scale up. As needed, Kubernetes can tell if applications require more CPU or memory and respond by either automatically expanding available capacity or using more of existing capacity. Scaling can be done at the Pod level, if there is enough compute available to just run more instances of the application (horizontal Pod autoscaling), or at the node level, if more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads"} +{"global_id": 1012, "doc_id": "eks", "chunk_id": "10", "question_id": 1, "question": "What services can delete unnecessary Pods and shut down unneeded nodes?", "answer_span": "these services can delete unnecessary Pods and shut down unneeded nodes.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} +{"global_id": 1013, "doc_id": "eks", "chunk_id": "10", "question_id": 2, "question": "What can Kubernetes do if an application or node becomes unhealthy or unavailable?", "answer_span": "Kubernetes can move running workloads to another available node.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. 
• Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} +{"global_id": 1014, "doc_id": "eks", "chunk_id": "10", "question_id": 3, "question": "How does Kubernetes ensure that the declared state matches the actual state?", "answer_span": "Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. 
See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} +{"global_id": 1015, "doc_id": "eks", "chunk_id": "10", "question_id": 4, "question": "What command can help manage multiple components in Kubernetes?", "answer_span": "the Kubernetes Kompose command can help you do that with Kubernetes.", "chunk": "more nodes need to be brought up to handle the increased capacity (Cluster Autoscaler or Karpenter). As capacity is no longer needed, these services can delete unnecessary Pods and shut down unneeded nodes. • Available — If an application or node becomes unhealthy or unavailable, Kubernetes can move running workloads to another available node. You can force the issue by simply deleting a running instance of a workload or node that’s running your workloads. The bottom line here is that workloads can be brought up in other locations if they can no longer run where they are. • Declarative — Kubernetes uses active reconciliation to constantly check that the state that you declare for your cluster matches the actual state. By applying Kubernetes objects to a cluster, typically through YAML-formatted configuration files, you can, for example, ask to start up the workloads you want to run on your cluster. You can later change the configurations to do something like use a later version of a container or allocate more memory. Kubernetes will do what it needs to do to establish the desired state. This can include bringing nodes up or down, stopping and restarting workloads, or pulling updated containers. • Composable — Because an application typically consists of multiple components, you want to be able to manage a set of these components (often represented by multiple containers) together. While Docker Compose offers a way to do this directly with Docker, the Kubernetes Why Kubernetes? 10 Amazon EKS User Guide Kompose command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like"} +{"global_id": 1016, "doc_id": "eks", "chunk_id": "11", "question_id": 1, "question": "What can command help you do with Kubernetes?", "answer_span": "command can help you do that with Kubernetes.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. 
Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 1017, "doc_id": "eks", "chunk_id": "11", "question_id": 2, "question": "What is the nature of the Kubernetes project?", "answer_span": "the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. 
This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 1018, "doc_id": "eks", "chunk_id": "11", "question_id": 3, "question": "Why do many organizations standardize their operations on Kubernetes?", "answer_span": "it allows them to manage all of their application needs in the same way.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. • Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 1019, "doc_id": "eks", "chunk_id": "11", "question_id": 4, "question": "What do most people deploying production workloads choose?", "answer_span": "most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts.", "chunk": "command can help you do that with Kubernetes. See Translate a Docker Compose File to Kubernetes Resources for an example of how to do this. • Extensible — Unlike proprietary software, the open source Kubernetes project is designed to be open to you extending Kubernetes any way that you like to meet your needs. APIs and configuration files are open to direct modifications. Third-parties are encouraged to write their own Controllers, to extend both infrastructure and end-user Kubernetes features. Webhooks let you set up cluster rules to enforce policies and adapt to changing conditions. For more ideas on how to extend Kubernetes clusters, see Extending Kubernetes. 
• Portable — Many organizations have standardized their operations on Kubernetes because it allows them to manage all of their application needs in the same way. Developers can use the same pipelines to build and store containerized applications. Those applications can then be deployed to Kubernetes clusters running on-premises, in clouds, on point-of-sales terminals in restaurants, or on IOT devices dispersed across company’s remote sites. Its open source nature makes it possible for people to develop these special Kubernetes distributions, along will tools needed to manage them. Managing Kubernetes Kubernetes source code is freely available, so with your own equipment you could install and manage Kubernetes yourself. However, self-managing Kubernetes requires deep operational expertise and takes time and effort to maintain. For those reasons, most people deploying production workloads choose a cloud provider (such as Amazon EKS) or on-premises provider (such as Amazon EKS Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS"} +{"global_id": 1020, "doc_id": "eks", "chunk_id": "12", "question_id": 1, "question": "What does Amazon EKS allow you to do regarding hardware?", "answer_span": "a cloud provider such as AWS Amazon EKS can save you on upfront costs.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. 
• Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} +{"global_id": 1021, "doc_id": "eks", "chunk_id": "12", "question_id": 2, "question": "What is managed by AWS in relation to Kubernetes?", "answer_span": "AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} +{"global_id": 1022, "doc_id": "eks", "chunk_id": "12", "question_id": 3, "question": "What does Amazon EKS manage in terms of control plane?", "answer_span": "Amazon EKS manages the security and availability of the AWS-hosted Kubernetes control plane.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. 
For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} +{"global_id": 1023, "doc_id": "eks", "chunk_id": "12", "question_id": 4, "question": "What can you rely on Amazon EKS or Amazon EKS Anywhere to provide when upgrading clusters?", "answer_span": "to provide tested versions of their Kubernetes distributions.", "chunk": "Anywhere) with its own tested Kubernetes distribution and support of Kubernetes experts. This allows you to offload much of the undifferentiated heavy lifting needed to maintain your clusters, including: • Hardware — If you don’t have hardware available to run Kubernetes per your requirements, a cloud provider such as AWS Amazon EKS can save you on upfront costs. With Amazon EKS, this means that you can consume the best cloud resources offered by AWS, including computer instances (Amazon Elastic Compute Cloud), your own private environment (Amazon VPC), central identity and permissions management (IAM), and storage (Amazon EBS). AWS manages the computers, networks, data centers, and all the other physical components needed to run Kubernetes. Likewise, you don’t have to plan your datacenter to handle the maximum capacity on your highest-demand days. For Amazon EKS Anywhere, or other on premises Kubernetes clusters, you are responsible for managing the infrastructure used in your Kubernetes deployments, but you can still rely on AWS to help you keep Kubernetes up to date. Why Kubernetes? 11 Amazon EKS User Guide • Control plane management — Amazon EKS manages the security and availability of the AWShosted Kubernetes control plane, which is responsible for scheduling containers, managing the availability of applications, and other key tasks, so you can focus on your application workloads. If your cluster breaks, AWS should have the means to restore your cluster to a running state. For Amazon EKS Anywhere, you would manage the control plane yourself. • Tested upgrades — When you upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. 
• Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running"} +{"global_id": 1024, "doc_id": "eks", "chunk_id": "13", "question_id": 1, "question": "What services can you rely on to upgrade your clusters?", "answer_span": "you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 1025, "doc_id": "eks", "chunk_id": "13", "question_id": 2, "question": "What does AWS provide to help with add-ons for Kubernetes?", "answer_span": "AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. 
Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 1026, "doc_id": "eks", "chunk_id": "13", "question_id": 3, "question": "What does Amazon EKS Anywhere provide?", "answer_span": "Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. 
The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 1027, "doc_id": "eks", "chunk_id": "13", "question_id": 4, "question": "What does the managed service automatically allocate for the cluster?", "answer_span": "The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster.", "chunk": "upgrade your clusters, you can rely on Amazon EKS or Amazon EKS Anywhere to provide tested versions of their Kubernetes distributions. • Add-ons — There are hundreds of projects built to extend and work with Kubernetes that you can add to your cluster’s infrastructure or use to aid the running of your workloads. Instead of building and managing those add-ons yourself, AWS provides the section called “Amazon EKS add-ons” that you can use with your clusters. Amazon EKS Anywhere provides Curated Packages that include builds of many popular open source projects. So you don’t have to build the software yourself or manage critical security patches, bug fixes, or upgrades. Likewise, if the defaults meet your needs, it’s typical for very little configuration of those add-ons to be needed. See the section called “Extend Clusters” for details on extending your cluster with add-ons. Kubernetes in action The following diagram shows key activities you would do as a Kubernetes Admin or Application Developer to create and use a Kubernetes cluster. In the process, it illustrates how Kubernetes components interact with each other, using the AWS cloud as the example of the underlying cloud provider. Why Kubernetes? 12 Amazon EKS User Guide A Kubernetes Admin creates the Kubernetes cluster using a tool specific to the type of provider on which the cluster will be built. This example uses the AWS cloud as the provider, which offers the managed Kubernetes service called Amazon EKS. The managed service automatically allocates the resources needed to create the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon"} +{"global_id": 1028, "doc_id": "eks", "chunk_id": "14", "question_id": 1, "question": "What does the managed service allocate for running workloads?", "answer_span": "allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. 
The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} +{"global_id": 1029, "doc_id": "eks", "chunk_id": "14", "question_id": 2, "question": "What tool does the Kubernetes Admin use to make requests for services?", "answer_span": "That tool makes requests for services directly to the cluster’s control plane.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. 
With that all done, someone wanting to"} +{"global_id": 1030, "doc_id": "eks", "chunk_id": "14", "question_id": 3, "question": "What does the developer need to do to deploy workloads to the cluster?", "answer_span": "The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} +{"global_id": 1031, "doc_id": "eks", "chunk_id": "14", "question_id": 4, "question": "What service does AWS offer for container registry?", "answer_span": "AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose.", "chunk": "the cluster, including creating two new Virtual Private Clouds (Amazon VPCs) for the cluster, setting up networking, and mapping Kubernetes permissions directly into the new VPCs for cloud asset management. The managed service also sees that the control plane services have places to run and allocates zero or more Amazon EC2 instances as Kubernetes nodes for running workloads. AWS manages one Amazon VPC itself for the control plane, while the other Amazon VPC contains the customer nodes that run workloads. Many of the Kubernetes Admin’s tasks going forward are done using Kubernetes tools such as kubectl. That tool makes requests for services directly to the cluster’s control plane. The ways that queries and changes are made to the cluster are then very similar to the ways you would do them on any Kubernetes cluster. An application developer wanting to deploy workloads to this cluster can perform several tasks. 
The developer needs to build the application into one or more container images, then push those images to a container registry that is accessible to the Kubernetes cluster. AWS offers the Amazon Elastic Container Registry (Amazon ECR) for that purpose. Why Kubernetes? 13 Amazon EKS User Guide To run the application, the developer can create YAML-formatted configuration files that tell the cluster how to run the application, including which containers to pull from the registry and how to wrap those containers in Pods. The control plane (scheduler) schedules the containers to one or more nodes and the container runtime on each node actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to"} +{"global_id": 1032, "doc_id": "eks", "chunk_id": "15", "question_id": 1, "question": "What can a developer set up to balance traffic to available containers?", "answer_span": "The developer can also set up an application load balancer to balance traffic to available containers running on each node", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 1033, "doc_id": "eks", "chunk_id": "15", "question_id": 2, "question": "What should you know if your job is to start and manage Kubernetes clusters?", "answer_span": "you should know how Kubernetes clusters are created, enhanced, managed, and deleted", "chunk": "actually pulls and runs the needed containers. 
The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 1034, "doc_id": "eks", "chunk_id": "15", "question_id": 3, "question": "What tools can you use to start an Amazon EKS cluster?", "answer_span": "to start an Amazon EKS cluster you can use eksctl create cluster", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. 
So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 1035, "doc_id": "eks", "chunk_id": "15", "question_id": 4, "question": "What tools does the Kubernetes project offer for creating a Kubernetes cluster manually?", "answer_span": "you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools", "chunk": "actually pulls and runs the needed containers. The developer can also set up an application load balancer to balance traffic to available containers running on each node and expose the application so it is available on a public network to the outside world. With that all done, someone wanting to use the application can connect to the application endpoint to access it. The following sections go through details of each of these features, from the perspective of Kubernetes Clusters and Workloads. Clusters If your job is to start and manage Kubernetes clusters, you should know how Kubernetes clusters are created, enhanced, managed, and deleted. You should also know what the components are that make up a cluster and what you need to do to maintain those components. Tools for managing clusters handle the overlap between the Kubernetes services and the underlying hardware provider. For that reason, automation of these tasks tend to be done by the Kubernetes provider (such as Amazon EKS or Amazon EKS Anywhere) using tools that are specific to the provider. For example, to start an Amazon EKS cluster you can use eksctl create cluster, while for Amazon EKS Anywhere you can use eksctl anywhere create cluster. Note that while these commands create a Kubernetes cluster, they are specific to the provider and are not part of the Kubernetes project itself. Cluster creation and management tools The Kubernetes project offers tools for creating a Kubernetes cluster manually. So if you want to install Kubernetes on a single machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported"} +{"global_id": 1036, "doc_id": "eks", "chunk_id": "16", "question_id": 1, "question": "What tools can you use to create a Kubernetes cluster?", "answer_span": "you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. 
Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 1037, "doc_id": "eks", "chunk_id": "16", "question_id": 2, "question": "What does Amazon EKS manage for you?", "answer_span": "Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. 
Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 1038, "doc_id": "eks", "chunk_id": "16", "question_id": 3, "question": "How can you create Amazon EKS clusters in AWS Cloud?", "answer_span": "In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. • Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 1039, "doc_id": "eks", "chunk_id": "16", "question_id": 4, "question": "What does eksctl set up in a Kubernetes cluster?", "answer_span": "Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster.", "chunk": "machine, or run the control plane on a machine and add nodes manually, you can use CLI tools like kind, minikube, or kubeadm that are listed under Kubernetes Install Tools. To simplify and automate the full lifecycle of cluster creation and management, it is much easier to use tools supported by an established Kubernetes provider, such as Amazon EKS or Amazon EKS Anywhere. In AWS Cloud, you can create Amazon EKS clusters using CLI tools, such as eksctl, or more declarative tools, such as Terraform (see Amazon EKS Blueprints for Terraform). You can also create a cluster from the AWS Management Console. See Amazon EKS features for a list what you get with Amazon EKS. Kubernetes responsibilities that Amazon EKS takes on for you include: Clusters 14 Amazon EKS User Guide • Managed control plane — AWS makes sure that the Amazon EKS cluster is available and scalable because it manages the control plane for you and makes it available across AWS Availability Zones. 
• Node management — Instead of manually adding nodes, you can have Amazon EKS create nodes automatically as needed, using Managed Node Groups (see the section called “Managed node groups”) or Karpenter. Managed Node Groups have integrations with Kubernetes Cluster Autoscaling. Using node management tools, you can take advantage of cost savings, with things like Spot Instances and node consolidation, and availability, using Scheduling features to set how workloads are deployed and nodes are selected. • Cluster networking — Using CloudFormation templates, eksctl sets up networking between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities"} +{"global_id": 1040, "doc_id": "eks", "chunk_id": "17", "question_id": 1, "question": "What does Amazon EKS save you from having to build?", "answer_span": "Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} +{"global_id": 1041, "doc_id": "eks", "chunk_id": "17", "question_id": 2, "question": "What are the add-ons automatically added when creating an Amazon EKS cluster?", "answer_span": "it automatically adds the Amazon EKS kube-proxy, Amazon VPC CNI plugin for Kubernetes, and CoreDNS add-ons.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. 
It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} +{"global_id": 1042, "doc_id": "eks", "chunk_id": "17", "question_id": 3, "question": "What does Amazon EKS Anywhere allow you to do?", "answer_span": "Amazon offers Amazon EKS Anywhere.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. 
However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} +{"global_id": 1043, "doc_id": "eks", "chunk_id": "17", "question_id": 4, "question": "What software does Amazon EKS Anywhere rely on?", "answer_span": "Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS.", "chunk": "between control plane and data plane (node) components in the Kubernetes cluster. It also sets up endpoints through which internal and external communications can take place. See Demystifying cluster networking for Amazon EKS worker nodes for details. Communications between Pods in Amazon EKS is done using Amazon EKS Pod Identities (see the section called “Pod Identity”), which provides a means of letting Pods tap into AWS cloud methods of managing credentials and permissions. • Add-Ons — Amazon EKS saves you from having to build and add software components that are commonly used to support Kubernetes clusters. For example, when you create an Amazon EKS cluster from the AWS Management Console, it automatically adds the Amazon EKS kube-proxy (the section called “kube-proxy”), Amazon VPC CNI plugin for Kubernetes (the section called “Amazon VPC CNI”), and CoreDNS (the section called “CoreDNS”) add-ons. See the section called “Amazon EKS add-ons” for more on these add-ons, including a list of which are available. To run your clusters on your own on-premises computers and networks, Amazon offers Amazon EKS Anywhere. Instead of the AWS Cloud being the provider, you have the choice of running Amazon EKS Anywhere on VMWare vSphere, bare metal (Tinkerbell provider), Snow, CloudStack, or Nutanix platforms using your own equipment. Amazon EKS Anywhere is based on the same Amazon EKS Distro software that is used by Amazon EKS. However, Amazon EKS Anywhere relies on different implementations of the Kubernetes Cluster API (CAPI) interface to manage the full lifecycle of the machines in an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User"} +{"global_id": 1044, "doc_id": "eks", "chunk_id": "18", "question_id": 1, "question": "What is an Amazon EKS Anywhere cluster?", "answer_span": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack)", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. 
Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} +{"global_id": 1045, "doc_id": "eks", "chunk_id": "18", "question_id": 2, "question": "What are the two major areas into which Kubernetes cluster components are divided?", "answer_span": "Kubernetes cluster components are divided into two major areas: control plane and worker nodes", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. 
Likewise, requests can be made from the API server to components within"} +{"global_id": 1046, "doc_id": "eks", "chunk_id": "18", "question_id": 3, "question": "What do Control Plane Components do?", "answer_span": "Control Plane Components manage the cluster and provide access to its APIs", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} +{"global_id": 1047, "doc_id": "eks", "chunk_id": "18", "question_id": 4, "question": "What is referred to as the Data Plane?", "answer_span": "The set of worker nodes for your cluster is referred to as the Data Plane", "chunk": "an Amazon EKS Anywhere cluster (such as CAPV for vSphere and CAPC for CloudStack). Because the entire cluster is running on your equipment, you take on the added responsibility of managing the control plane and backing up its data (see etcd later in this document). Clusters 15 Amazon EKS User Guide Cluster components Kubernetes cluster components are divided into two major areas: control plane and worker nodes. Control Plane Components manage the cluster and provide access to its APIs. Worker nodes (sometimes just referred to as Nodes) provide the places where the actual workloads are run. Node Components consist of services that run on each node to communicate with the control plane and run containers. The set of worker nodes for your cluster is referred to as the Data Plane. Control plane The control plane consists of a set of services that manage the cluster. These services may all be running on a single computer or may be spread across multiple computers. Internally, these are referred to as Control Plane Instances (CPIs). How CPIs are run depends on the size of the cluster and requirements for high availability. 
As demand increase in the cluster, a control plane service can scale to provide more instances of that service, with requests being load balanced between the instances. Tasks that components of the Kubernetes control plane performs include: • Communicating with cluster components (API server) — The API server (kube-apiserver) exposes the Kubernetes API so requests to the cluster can be made from both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within"} +{"global_id": 1048, "doc_id": "eks", "chunk_id": "19", "question_id": 1, "question": "What role does the etcd service play in a cluster?", "answer_span": "The etcd service provides the critical role of keeping track of the current state of the cluster.", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} +{"global_id": 1049, "doc_id": "eks", "chunk_id": "19", "question_id": 2, "question": "What happens if the etcd service becomes inaccessible?", "answer_span": "If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while.", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. 
If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} +{"global_id": 1050, "doc_id": "eks", "chunk_id": "19", "question_id": 3, "question": "Who directs requests to start or stop a Pod in Kubernetes?", "answer_span": "Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler).", "chunk": "both inside and outside of the cluster. In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} +{"global_id": 1051, "doc_id": "eks", "chunk_id": "19", "question_id": 4, "question": "What does the Scheduler do in a Kubernetes cluster?", "answer_span": "it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on.", "chunk": "both inside and outside of the cluster. 
In other words, requests to add or change a cluster’s objects (Pods, Services, Nodes, and so on) can come from outside commands, such as requests from kubectl to run a Pod. Likewise, requests can be made from the API server to components within the cluster, such as a query to the kubelet service for the status of a Pod. • Store data about the cluster (etcd key value store) — The etcd service provides the critical role of keeping track of the current state of the cluster. If the etcd service became inaccessible, you would be unable to update or query the status of the cluster, though workloads would continue to run for a while. For that reason, critical clusters typically have multiple, loadbalanced instances of the etcd service running at a time and do periodic backups of the etcd key value store in case of data loss or corruption. Keep in mind that, in Amazon EKS, this is all handled for you automatically by default. Amazon EKS Anywhere provides instruction for etcd backup and restore. See the etcd Data Model to learn how etcd manages data. • Schedule Pods to nodes (Scheduler) — Requests to start or stop a Pod in Kubernetes are directed to the Kubernetes Scheduler (kube-scheduler). Because a cluster could have multiple nodes that are capable of running the Pod, it is up to the Scheduler to choose which node (or nodes, in the case of replicas) the Pod should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed"} +{"global_id": 1052, "doc_id": "eks", "chunk_id": "20", "question_id": 1, "question": "What happens if there is not enough available capacity to run the requested Pod on an existing node?", "answer_span": "the request will fail, unless you have made other provisions.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. 
However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} +{"global_id": 1053, "doc_id": "eks", "chunk_id": "20", "question_id": 2, "question": "What does the Kubernetes Controller Manager do?", "answer_span": "The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} +{"global_id": 1054, "doc_id": "eks", "chunk_id": "20", "question_id": 3, "question": "What handles interactions between Kubernetes and the cloud provider?", "answer_span": "Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager).", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. 
Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} +{"global_id": 1055, "doc_id": "eks", "chunk_id": "20", "question_id": 4, "question": "What is a more standard configuration for a Kubernetes cluster?", "answer_span": "a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads.", "chunk": "should run on. If there is not enough available capacity to run the requested Pod on an existing node, the request will fail, unless you have made other Clusters 16 Amazon EKS User Guide provisions. Those provisions could include enabling services such as Managed Node Groups (the section called “Managed node groups”) or Karpenter that can automatically start up new nodes to handle the workloads. • Keep components in desired state (Controller Manager) — The Kubernetes Controller Manager runs as a daemon process (kube-controller-manager) to watch the state of the cluster and make changes to the cluster to reestablish the expected states. In particular, there are several controllers that watch over different Kubernetes objects, which includes a statefulsetcontroller, endpoint-controller, cronjob-controller, node-controller, and others. • Manage cloud resources (Cloud Controller Manager) — Interactions between Kubernetes and the cloud provider that carries out requests for the underlying data center resources are handled by the Cloud Controller Manager (cloud-controller-manager). Controllers managed by the Cloud Controller Manager can include a route controller (for setting up cloud network routes), service controller (for using cloud load balancing services), and node lifecycle controller (to keep nodes in sync with Kubernetes throughout their lifecycles). 
Worker Nodes (data plane) For a single-node Kubernetes cluster, workloads run on the same machine as the control plane. However, a more standard configuration is to have one or more separate computer systems (Nodes) that are dedicated to running Kubernetes workloads. When you first create a Kubernetes cluster, some cluster creation tools allow you to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) —"} +{"global_id": 1056, "doc_id": "eks", "chunk_id": "21", "question_id": 1, "question": "What service manages each node in the cluster?", "answer_span": "Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 1057, "doc_id": "eks", "chunk_id": "21", "question_id": 2, "question": "What is the default container runtime mentioned in the text?", "answer_span": "The default container runtime is containerd.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. 
The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 1058, "doc_id": "eks", "chunk_id": "21", "question_id": 3, "question": "What feature does Kubernetes use to support communication between Pods?", "answer_span": "Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. 
Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 1059, "doc_id": "eks", "chunk_id": "21", "question_id": 4, "question": "What service runs on every node to allow communication between Pods?", "answer_span": "The kube-proxy service runs on every node to allow that communication between Pods to take place.", "chunk": "to configure a certain number nodes to be added to the cluster (either by identifying existing computer systems or by having the provider create new ones). Before any workloads are added to those systems, services are added to each node to implement these features: • Manage each node (kubelet) — The API server communicates with the kubelet service running on each node to make sure that the node is properly registered and Pods requested by the Scheduler are running. The kubelet can read the Pod manifests and set up storage volumes or other features needed by the Pods on the local system. It can also check on the health of the locally running containers. • Run containers on a node (container runtime) — The Container Runtime on each node manages the containers requested for each Pod assigned to the node. That means that it can pull container images from the appropriate registry, run the container, stop it, and responds to queries about the container. The default container runtime is containerd. As of Kubernetes 1.24, the special integration of Docker (dockershim) that could be used as the container runtime was Clusters 17 Amazon EKS User Guide dropped from Kubernetes. While you can still use Docker to test and run containers on your local system, to use Docker with Kubernetes you would now have to Install Docker Engine on each node to use it with Kubernetes. • Manage networking between containers (kube-proxy) — To be able to support communication between Pods, Kubernetes uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes"} +{"global_id": 1060, "doc_id": "eks", "chunk_id": "22", "question_id": 1, "question": "What feature is used to set up Pod networks that track IP addresses and ports?", "answer_span": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. 
Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 1061, "doc_id": "eks", "chunk_id": "22", "question_id": 2, "question": "What runs on every node to allow communication between Pods?", "answer_span": "The kube-proxy service runs on every node to allow that communication between Pods to take place.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 1062, "doc_id": "eks", "chunk_id": "22", "question_id": 3, "question": "What is a common example of a service that provides DNS services to the cluster?", "answer_span": "A common example is the CoreDNS service, which provides DNS services to the cluster.", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. 
Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 1063, "doc_id": "eks", "chunk_id": "22", "question_id": 4, "question": "What does Kubernetes define as a Workload?", "answer_span": "Kubernetes defines a Workload as \"an application running on Kubernetes.\"", "chunk": "uses a feature referred to as a Service to set up Pod networks that track IP addresses and ports associated with those Pods. The kube-proxy service runs on every node to allow that communication between Pods to take place. Extend Clusters There are some services you can add to Kubernetes to support the cluster, but are not run in the control plane. These services often run directly on nodes in the kube-system namespace or in its own namespace (as is often done with third-party service providers). A common example is the CoreDNS service, which provides DNS services to the cluster. Refer to Discovering builtin services for information on how to see which cluster services are running in kube-system on your cluster. There are different types of add-ons you can consider adding to your clusters. To keep your clusters healthy, you can add observability features (see Monitor clusters) that allow you to do things like logging, auditing, and metrics. With this information, you can troubleshoot problems that occur, often through the same observability interfaces. Examples of these types of services include Amazon GuardDuty, CloudWatch (see the section called “Amazon CloudWatch”), AWS Distro for OpenTelemetry, Amazon VPC CNI plugin for Kubernetes (see the section called “Amazon VPC CNI”), and Grafana Kubernetes Monitoring. 
For storage (see App data storage), add-ons to Amazon EKS include Amazon Elastic Block Store CSI Driver (see the section called “Amazon EBS”), Amazon Elastic File System CSI Driver (see the section called “Amazon EFS”), and several third-party storage add-ons such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of"} +{"global_id": 1064, "doc_id": "eks", "chunk_id": "23", "question_id": 1, "question": "What is defined as an application running on Kubernetes?", "answer_span": "Kubernetes defines a Workload as \"an application running on Kubernetes.\"", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 1065, "doc_id": "eks", "chunk_id": "23", "question_id": 2, "question": "What is the most basic element of an application workload that you deploy and manage in Kubernetes?", "answer_span": "The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. 
As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 1066, "doc_id": "eks", "chunk_id": "23", "question_id": 3, "question": "What does a Pod represent?", "answer_span": "A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. 
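To make the web-server-plus-sidecar idea just described concrete, here is a minimal sketch of a two-container Pod expressed with the official Kubernetes Python client. The container names, image tags, and use of the default namespace are assumptions for illustration only, not part of the guide text.

```python
# Minimal sketch (not from the EKS guide): a Pod holding a web server and a
# tightly coupled logging sidecar. Names, images, and namespace are assumed.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                image="nginx:1.27",
                ports=[client.V1ContainerPort(container_port=80)],
            ),
            # The sidecar shares the Pod's network namespace, so it reaches
            # the web container over localhost and runs on the same node.
            client.V1Container(
                name="log-sidecar",
                image="busybox:1.36",
                command=["sh", "-c", "tail -f /dev/null"],
            ),
        ]
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because both containers sit in a single PodSpec, the scheduler always places them together, which is the behavior described above.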
Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 1067, "doc_id": "eks", "chunk_id": "23", "question_id": 4, "question": "What ensures that both containers always run on the same node?", "answer_span": "In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node.", "chunk": "such as Amazon FSx for NetApp ONTAP CSI driver the section called “Amazon FSx for NetApp ONTAP”). For a more complete list of available Amazon EKS add-ons, see the section called “Amazon EKS add-ons”. Workloads Kubernetes defines a Workload as \"an application running on Kubernetes.\" That application can consist of a set of microservices run as Containers in Pods, or could be run as a batch job or other type of applications. The job of Kubernetes is to make sure that the requests that you make for those objects to be set up or deployed are carried out. As someone deploying applications, you Workloads 18 Amazon EKS User Guide should learn about how containers are built, how Pods are defined, and what methods you can use for deploying them. Containers The most basic element of an application workload that you deploy and manage in Kubernetes is a Pod . A Pod represents a way of holding the components of an application as well as defining specifications that describe the Pod’s attributes. Contrast this to something like an RPM or Deb package, which packages together software for a Linux system, but does not itself run as an entity. Because the Pod is the smallest deployable unit, it typically holds a single container. However, multiple containers can be in a Pod in cases where the containers are tightly coupled. For example, a web server container might be packaged in a Pod with a sidecar type of container that may provide logging, monitoring, or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers"} +{"global_id": 1068, "doc_id": "eks", "chunk_id": "24", "question_id": 1, "question": "What ensures that for each running instance of the Pod, both containers always run on the same node?", "answer_span": "being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. 
Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 1069, "doc_id": "eks", "chunk_id": "24", "question_id": 2, "question": "What do Pod specifications (PodSpec) define?", "answer_span": "Pod specifications (PodSpec) define the desired state of the Pod.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 1070, "doc_id": "eks", "chunk_id": "24", "question_id": 3, "question": "What is the smallest unit you deploy?", "answer_span": "While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage.", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. 
The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 1071, "doc_id": "eks", "chunk_id": "24", "question_id": 4, "question": "What do you typically start with when you build a container?", "answer_span": "you typically start with a Dockerfile (literally named that).", "chunk": "or other service that is closely tied to the web server container. In this case, being in the same Pod ensures that for each running instance of the Pod, both containers always run on the same node. Likewise, all containers in a Pod share the same environment, with the containers in a Pod running as though they are in the same isolated host. The effect of this is that the containers share a single IP address that provides access to the Pod and the containers can communicate with each other as though they were running on their own localhost. Pod specifications (PodSpec) define the desired state of the Pod. You can deploy an individual Pod or multiple Pods by using workload resources to manage Pod Templates. Workload resources include Deployments (to manage multiple Pod Replicas), StatefulSets (to deploy Pods that need to be unique, such as database Pods), and DaemonSets (where a Pod needs to run continuously on every node). More on those later. While a Pod is the smallest unit you deploy, a container is the smallest unit that you build and manage. Building Containers The Pod is really just a structure around one or more containers, with each container itself holding the file system, executables, configuration files, libraries, and other components to actually run the application. Because a company called Docker Inc. first popularized containers, some people refer to containers as Docker Containers. However, the Open Container Initiative has since defined container runtimes, images, and distribution methods for the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. 
Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside"} +{"global_id": 1072, "doc_id": "eks", "chunk_id": "25", "question_id": 1, "question": "What is a Dockerfile typically named?", "answer_span": "you typically start with a Dockerfile (literally named that)", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} +{"global_id": 1073, "doc_id": "eks", "chunk_id": "25", "question_id": 2, "question": "What is a base container image typically built from?", "answer_span": "A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu)", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. 
In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} +{"global_id": 1074, "doc_id": "eks", "chunk_id": "25", "question_id": 3, "question": "What can you run in your Dockerfile to install a Java application?", "answer_span": "you can run npm and yarn to install a Java application", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} +{"global_id": 1075, "doc_id": "eks", "chunk_id": "25", "question_id": 4, "question": "What instructions can you add to a Dockerfile?", "answer_span": "These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on", "chunk": "the industry. Add to that the fact that containers were created from many existing Linux features, others often refer to containers as OCI Containers, Linux Containers, or just Containers. Workloads 19 Amazon EKS User Guide When you build a container, you typically start with a Dockerfile (literally named that). 
Inside that Dockerfile, you identify: • A base image — A base container image is a container that is typically built from either a minimal version of an operating system’s file system (such as Red Hat Enterprise Linux or Ubuntu) or a minimal system that is enhanced to provide software to run specific types of applications (such as a nodejs or python apps). • Application software — You can add your application software to your container in much the same way you would add it to a Linux system. For example, in your Dockerfile you can run npm and yarn to install a Java application or yum and dnf to install RPM packages. In other words, using a RUN command in a Dockerfile, you can run any command that is available in the file system of your base image to install software or configure software inside of the resulting container image. • Instructions — The Dockerfile reference describes the instructions you can add to a Dockerfile when you configure it. These include instructions used to build what is in the container itself (ADD or COPY files from the local system), identify commands to execute when the container is run (CMD or ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build"} +{"global_id": 1076, "doc_id": "eks", "chunk_id": "26", "question_id": 1, "question": "What tools are available to build container images besides docker?", "answer_span": "other tools that are available to build container images include podman and nerdctl.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). 
To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 1077, "doc_id": "eks", "chunk_id": "26", "question_id": 2, "question": "What is the purpose of running a private container registry on your workstation?", "answer_span": "Running a private container registry on your workstation allows you to store container images locally, making them readily available to you.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 1078, "doc_id": "eks", "chunk_id": "26", "question_id": 3, "question": "What are examples of public container registries?", "answer_span": "Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry.", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. 
Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 1079, "doc_id": "eks", "chunk_id": "26", "question_id": 4, "question": "What commands can you use to start a container on your local desktop?", "answer_span": "you can use docker run or podman run commands to start", "chunk": "ENTRYPOINT), and connect the container to the system it runs on (by identifying the USER to run as, a local VOLUME to mount, or the ports to EXPOSE). While the docker command and service have traditionally been used to build containers (docker build), other tools that are available to build container images include podman and nerdctl. See Building Better Container Images or Overview of Docker Build to learn about building containers. Storing Containers Once you’ve built your container image, you can store it in a container distribution registry on your workstation or on a public container registry. Running a private container registry on your workstation allows you to store container images locally, making them readily available to you. To store container images in a more public manner, you can push them to a public container registry. Public container registries provide a central location for storing and distributing container images. Examples of public container registries include the Amazon Elastic Container Registry, Red Hat Quay registry, and Docker Hub registry. When running containerized workloads on Amazon Elastic Kubernetes Service (Amazon EKS) we recommend pulling copies of Docker Official Images that are stored in Amazon Elastic Container Registry. Amazon ECR has been storing these images since 2021. You can search for popular Workloads 20 Amazon EKS User Guide container images in the Amazon ECR Public Gallery, and specifically for the Docker Hub images, you can search the Amazon ECR Docker Gallery. Running containers Because containers are built in a standard format, a container can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). 
To test a container or just run it on your local desktop, you can use docker run or podman run commands to start"} +{"global_id": 1080, "doc_id": "eks", "chunk_id": "27", "question_id": 1, "question": "What can run on any machine that can run a container runtime?", "answer_span": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm).", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 1081, "doc_id": "eks", "chunk_id": "27", "question_id": 2, "question": "What commands can you use to start up a container on the localhost?", "answer_span": "you can use docker run or podman run commands to start up a container on the localhost.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. 
Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 1082, "doc_id": "eks", "chunk_id": "27", "question_id": 3, "question": "What must the attributes of a Pod include at a minimum?", "answer_span": "Those attributes must include at least the Pod name and the container image to run.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 1083, "doc_id": "eks", "chunk_id": "27", "question_id": 4, "question": "What happens to data storage in a running container when it is stopped and deleted?", "answer_span": "data storage in that container will disappear, unless you set up more permanent storage.", "chunk": "can run on any machine that can run a container runtime (such as Docker) and whose contents match the local machine’s architecture (such as x86_64 or arm). To test a container or just run it on your local desktop, you can use docker run or podman run commands to start up a container on the localhost. For Kubernetes, however, each worker node has a container runtime deployed and it is up to Kubernetes to request that a node run a container. 
Once a container has been assigned to run on a node, the node looks to see if the requested version of the container image already exists on the node. If it doesn’t, Kubernetes tells the container runtime to pull that container from the appropriate container registry, then run that container locally. Keep in mind that a container image refers to the software package that is moved around between your laptop, the container registry, and Kubernetes nodes. A container refers to a running instance of that image. Pods Once your containers are ready, working with Pods includes configuring, deploying, and making the Pods accessible. Configuring Pods When you define a Pod, you assign a set of attributes to it. Those attributes must include at least the Pod name and the container image to run. However, there are many other things you want to configure with your Pod definitions as well (see the PodSpec page for details on what can go into a Pod). These include: • Storage — When a running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device"} +{"global_id": 1084, "doc_id": "eks", "chunk_id": "28", "question_id": 1, "question": "What happens to data storage in a running container when it is stopped and deleted?", "answer_span": "data storage in that container will disappear, unless you set up more permanent storage.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. 
For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 1085, "doc_id": "eks", "chunk_id": "28", "question_id": 2, "question": "What is a Persistent Volume?", "answer_span": "A Persistent Volume is one that continues to exist after the Pod is deleted.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 1086, "doc_id": "eks", "chunk_id": "28", "question_id": 3, "question": "What can be stored as secrets in Kubernetes?", "answer_span": "Keys, passwords, and tokens are among the items that can be stored as secrets.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. 
• Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 1087, "doc_id": "eks", "chunk_id": "28", "question_id": 4, "question": "What does a ConfigMap tend to hold?", "answer_span": "A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service.", "chunk": "running container is stopped and deleted, data storage in that container will disappear, unless you set up more permanent storage. Kubernetes supports many different storage types and abstracts them under the umbrella of Volumes. Storage types include CephFS, NFS, iSCSI, and others. You can even use a local block device from the local computer. With one of those storage types available from your cluster, you can mount the storage volume to a selected mount point in your container’s file system. A Persistent Volume is one that continues to exist after the Pod is deleted, while an Ephemeral Volume is deleted when the Pod is deleted. If your cluster administrator created different storage classes for your cluster, you might have the Workloads 21 Amazon EKS User Guide option for choosing the attributes of the storage you use, such as whether the volume is deleted or reclaimed after use, whether it will expand if more space is needed, and even whether it meets certain performance requirements. • Secrets — By making Secrets available to containers in Pod specs, you can provide the permissions those containers need to access file systems, data bases, or other protected assets. Keys, passwords, and tokens are among the items that can be stored as secrets. Using secrets makes it so you don’t have to store this information in container images, but need only make the secrets available to running containers. Similar to Secrets are ConfigMaps. A ConfigMap tends to hold less critical information, such as key-value pairs for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the"} +{"global_id": 1088, "doc_id": "eks", "chunk_id": "29", "question_id": 1, "question": "What can you request for each container in terms of resources?", "answer_span": "For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. 
For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kubesystem namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto"} +{"global_id": 1089, "doc_id": "eks", "chunk_id": "29", "question_id": 2, "question": "What is a Pod disruption budget used for?", "answer_span": "By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kubesystem namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. 
However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto"} +{"global_id": 1090, "doc_id": "eks", "chunk_id": "29", "question_id": 3, "question": "What is a common way to secure and manage Pods for a particular application?", "answer_span": "Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. • Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kubesystem namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto"} +{"global_id": 1091, "doc_id": "eks", "chunk_id": "29", "question_id": 4, "question": "What is a popular way to automate storage and updates for critical clusters?", "answer_span": "However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources.", "chunk": "for configuring a service. • Container resources — Objects for further configuring containers can take the form of resource configuration. For each container, you can request the amount of memory and CPU that it can use, as well as place limits of the total amount of those resources that the container can use. See Resource Management for Pods and Containers for examples. 
• Disruptions — Pods can be disrupted involuntarily (a node goes down) or voluntarily (an upgrade is desired). By configuring a Pod disruption budget, you can exert some control over how available your application remains when disruptions occur. See Specifying a Disruption Budget for your application for examples. • Namespaces — Kubernetes provides different ways to isolate Kubernetes components and workloads from each other. Running all the Pods for a particular application in the same Namespace is a common way to secure and manage those Pods together. You can create your own namespaces to use or choose to not indicate a namespace (which causes Kubernetes to use the default namespace). Kubernetes control plane components typically run in the kubesystem namespace. The configuration just described is typically gathered together in a YAML file to be applied to the Kubernetes cluster. For personal Kubernetes clusters, you might just store these YAML files on your local system. However, with more critical clusters and workloads, GitOps is a popular way to automate storage and updates to both workload and Kubernetes infrastructure resources. The objects used to gather together and deploy Pod information is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto"} +{"global_id": 1092, "doc_id": "eks", "chunk_id": "30", "question_id": 1, "question": "What does EKS Auto Mode extend management of?", "answer_span": "EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto Mode EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself, to allow AWS to also set up and manage the infrastructure that enables the smooth operation of your workloads. You can delegate key infrastructure decisions and leverage the expertise of AWS for day-to-day operations. Cluster infrastructure managed by AWS includes many Kubernetes capabilities as core components, as opposed to add-ons, such as compute autoscaling, pod and service networking, application load balancing, cluster DNS, block storage, and GPU support. To get started, you can deploy a new EKS Auto Mode cluster or enable EKS Auto Mode on an existing cluster. You can deploy, upgrade, or modify your EKS Auto Mode clusters using eksctl, the AWS CLI, the AWS Management Console, EKS APIs, or your preferred infrastructure-as-code tools. With EKS Auto Mode, you can continue using your preferred Kubernetes-compatible tools. EKS Auto Mode integrates with AWS services like Amazon EC2, Amazon EBS, and ELB, leveraging AWS cloud resources that follow best practices. These resources are automatically scaled, costoptimized, and regularly updated to help minimize operational costs and overhead. Features EKS Auto Mode provides the following high-level features: Streamline Kubernetes Cluster Management: EKS Auto Mode streamlines EKS management by providing production-ready clusters with minimal operational overhead. 
With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for"} +{"global_id": 1093, "doc_id": "eks", "chunk_id": "30", "question_id": 2, "question": "What can you delegate to AWS when using EKS Auto Mode?", "answer_span": "You can delegate key infrastructure decisions and leverage the expertise of AWS for day-to-day operations.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto Mode EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself, to allow AWS to also set up and manage the infrastructure that enables the smooth operation of your workloads. You can delegate key infrastructure decisions and leverage the expertise of AWS for day-to-day operations. Cluster infrastructure managed by AWS includes many Kubernetes capabilities as core components, as opposed to add-ons, such as compute autoscaling, pod and service networking, application load balancing, cluster DNS, block storage, and GPU support. To get started, you can deploy a new EKS Auto Mode cluster or enable EKS Auto Mode on an existing cluster. You can deploy, upgrade, or modify your EKS Auto Mode clusters using eksctl, the AWS CLI, the AWS Management Console, EKS APIs, or your preferred infrastructure-as-code tools. With EKS Auto Mode, you can continue using your preferred Kubernetes-compatible tools. EKS Auto Mode integrates with AWS services like Amazon EC2, Amazon EBS, and ELB, leveraging AWS cloud resources that follow best practices. These resources are automatically scaled, costoptimized, and regularly updated to help minimize operational costs and overhead. Features EKS Auto Mode provides the following high-level features: Streamline Kubernetes Cluster Management: EKS Auto Mode streamlines EKS management by providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for"} +{"global_id": 1094, "doc_id": "eks", "chunk_id": "30", "question_id": 3, "question": "What does EKS Auto Mode streamline?", "answer_span": "EKS Auto Mode streamlines EKS management by providing production-ready clusters with minimal operational overhead.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto Mode EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself, to allow AWS to also set up and manage the infrastructure that enables the smooth operation of your workloads. You can delegate key infrastructure decisions and leverage the expertise of AWS for day-to-day operations. 
Cluster infrastructure managed by AWS includes many Kubernetes capabilities as core components, as opposed to add-ons, such as compute autoscaling, pod and service networking, application load balancing, cluster DNS, block storage, and GPU support. To get started, you can deploy a new EKS Auto Mode cluster or enable EKS Auto Mode on an existing cluster. You can deploy, upgrade, or modify your EKS Auto Mode clusters using eksctl, the AWS CLI, the AWS Management Console, EKS APIs, or your preferred infrastructure-as-code tools. With EKS Auto Mode, you can continue using your preferred Kubernetes-compatible tools. EKS Auto Mode integrates with AWS services like Amazon EC2, Amazon EBS, and ELB, leveraging AWS cloud resources that follow best practices. These resources are automatically scaled, costoptimized, and regularly updated to help minimize operational costs and overhead. Features EKS Auto Mode provides the following high-level features: Streamline Kubernetes Cluster Management: EKS Auto Mode streamlines EKS management by providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for"} +{"global_id": 1095, "doc_id": "eks", "chunk_id": "30", "question_id": 4, "question": "How does EKS Auto Mode handle application availability?", "answer_span": "EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications.", "chunk": "is defined by one of the following deployment methods. Deploying Pods The method you would choose for deploying Pods depends on the type of application you plan to run with those Pods. Here are some of your choices: Workloads 22 Amazon EKS User Guide Automate cluster infrastructure with EKS Auto Mode EKS Auto Mode extends AWS management of Kubernetes clusters beyond the cluster itself, to allow AWS to also set up and manage the infrastructure that enables the smooth operation of your workloads. You can delegate key infrastructure decisions and leverage the expertise of AWS for day-to-day operations. Cluster infrastructure managed by AWS includes many Kubernetes capabilities as core components, as opposed to add-ons, such as compute autoscaling, pod and service networking, application load balancing, cluster DNS, block storage, and GPU support. To get started, you can deploy a new EKS Auto Mode cluster or enable EKS Auto Mode on an existing cluster. You can deploy, upgrade, or modify your EKS Auto Mode clusters using eksctl, the AWS CLI, the AWS Management Console, EKS APIs, or your preferred infrastructure-as-code tools. With EKS Auto Mode, you can continue using your preferred Kubernetes-compatible tools. EKS Auto Mode integrates with AWS services like Amazon EC2, Amazon EBS, and ELB, leveraging AWS cloud resources that follow best practices. These resources are automatically scaled, costoptimized, and regularly updated to help minimize operational costs and overhead. Features EKS Auto Mode provides the following high-level features: Streamline Kubernetes Cluster Management: EKS Auto Mode streamlines EKS management by providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. 
Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for"} +{"global_id": 1096, "doc_id": "eks", "chunk_id": "31", "question_id": 1, "question": "What does EKS Auto Mode dynamically do based on the demands of your Kubernetes applications?", "answer_span": "EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications.", "chunk": "providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for manual capacity planning and ensures application availability. Efficiency: EKS Auto Mode is designed to optimize compute costs while adhering to the flexibility defined by your NodePool and workload requirements. It also terminates unused instances and consolidates workloads onto other nodes to improve cost efficiency. Security: EKS Auto Mode uses AMIs that are treated as immutable, for your nodes. These AMIs enforce locked-down software, enable SELinux mandatory access controls, and provide read-only root file systems. Additionally, nodes launched by EKS Auto Mode have a maximum lifetime of 21 days (which you can reduce), after which they are automatically replaced with new nodes. This Features 74 Amazon EKS User Guide approach enhances your security posture by regularly cycling nodes, aligning with best practices already adopted by many customers. Automated Upgrades: EKS Auto Mode keeps your Kubernetes cluster, nodes, and related components up to date with the latest patches, while respecting your configured Pod Disruption Budgets (PDBs) and NodePool Disruption Budgets (NDBs). Up to the 21-day maximum lifetime, intervention might be required if blocking PDBs or other configurations prevent updates. Managed Components: EKS Auto Mode includes Kubernetes and AWS cloud features as core components that would otherwise have to be managed as add-ons. This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While"} +{"global_id": 1097, "doc_id": "eks", "chunk_id": "31", "question_id": 2, "question": "How does EKS Auto Mode enhance your security posture?", "answer_span": "This approach enhances your security posture by regularly cycling nodes, aligning with best practices already adopted by many customers.", "chunk": "providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for manual capacity planning and ensures application availability. Efficiency: EKS Auto Mode is designed to optimize compute costs while adhering to the flexibility defined by your NodePool and workload requirements. 
It also terminates unused instances and consolidates workloads onto other nodes to improve cost efficiency. Security: EKS Auto Mode uses AMIs that are treated as immutable, for your nodes. These AMIs enforce locked-down software, enable SELinux mandatory access controls, and provide read-only root file systems. Additionally, nodes launched by EKS Auto Mode have a maximum lifetime of 21 days (which you can reduce), after which they are automatically replaced with new nodes. This Features 74 Amazon EKS User Guide approach enhances your security posture by regularly cycling nodes, aligning with best practices already adopted by many customers. Automated Upgrades: EKS Auto Mode keeps your Kubernetes cluster, nodes, and related components up to date with the latest patches, while respecting your configured Pod Disruption Budgets (PDBs) and NodePool Disruption Budgets (NDBs). Up to the 21-day maximum lifetime, intervention might be required if blocking PDBs or other configurations prevent updates. Managed Components: EKS Auto Mode includes Kubernetes and AWS cloud features as core components that would otherwise have to be managed as add-ons. This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While"} +{"global_id": 1098, "doc_id": "eks", "chunk_id": "31", "question_id": 3, "question": "What is the maximum lifetime of nodes launched by EKS Auto Mode?", "answer_span": "nodes launched by EKS Auto Mode have a maximum lifetime of 21 days (which you can reduce)", "chunk": "providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for manual capacity planning and ensures application availability. Efficiency: EKS Auto Mode is designed to optimize compute costs while adhering to the flexibility defined by your NodePool and workload requirements. It also terminates unused instances and consolidates workloads onto other nodes to improve cost efficiency. Security: EKS Auto Mode uses AMIs that are treated as immutable, for your nodes. These AMIs enforce locked-down software, enable SELinux mandatory access controls, and provide read-only root file systems. Additionally, nodes launched by EKS Auto Mode have a maximum lifetime of 21 days (which you can reduce), after which they are automatically replaced with new nodes. This Features 74 Amazon EKS User Guide approach enhances your security posture by regularly cycling nodes, aligning with best practices already adopted by many customers. Automated Upgrades: EKS Auto Mode keeps your Kubernetes cluster, nodes, and related components up to date with the latest patches, while respecting your configured Pod Disruption Budgets (PDBs) and NodePool Disruption Budgets (NDBs). Up to the 21-day maximum lifetime, intervention might be required if blocking PDBs or other configurations prevent updates. Managed Components: EKS Auto Mode includes Kubernetes and AWS cloud features as core components that would otherwise have to be managed as add-ons. 
This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While"} +{"global_id": 1099, "doc_id": "eks", "chunk_id": "31", "question_id": 4, "question": "What does EKS Auto Mode include as core components?", "answer_span": "EKS Auto Mode includes Kubernetes and AWS cloud features as core components that would otherwise have to be managed as add-ons.", "chunk": "providing production-ready clusters with minimal operational overhead. With EKS Auto Mode, you can run demanding, dynamic workloads confidently, without requiring deep EKS expertise. Application Availability: EKS Auto Mode dynamically adds or removes nodes in your EKS cluster based on the demands of your Kubernetes applications. This minimizes the need for manual capacity planning and ensures application availability. Efficiency: EKS Auto Mode is designed to optimize compute costs while adhering to the flexibility defined by your NodePool and workload requirements. It also terminates unused instances and consolidates workloads onto other nodes to improve cost efficiency. Security: EKS Auto Mode uses AMIs that are treated as immutable, for your nodes. These AMIs enforce locked-down software, enable SELinux mandatory access controls, and provide read-only root file systems. Additionally, nodes launched by EKS Auto Mode have a maximum lifetime of 21 days (which you can reduce), after which they are automatically replaced with new nodes. This Features 74 Amazon EKS User Guide approach enhances your security posture by regularly cycling nodes, aligning with best practices already adopted by many customers. Automated Upgrades: EKS Auto Mode keeps your Kubernetes cluster, nodes, and related components up to date with the latest patches, while respecting your configured Pod Disruption Budgets (PDBs) and NodePool Disruption Budgets (NDBs). Up to the 21-day maximum lifetime, intervention might be required if blocking PDBs or other configurations prevent updates. Managed Components: EKS Auto Mode includes Kubernetes and AWS cloud features as core components that would otherwise have to be managed as add-ons. This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While"} +{"global_id": 1100, "doc_id": "eks", "chunk_id": "32", "question_id": 1, "question": "What does EKS Auto Mode automate?", "answer_span": "EKS Auto Mode streamlines the operation of your Amazon EKS clusters by automating key infrastructure components.", "chunk": "This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While you should not edit default NodePools and NodeClasses, you can add new custom NodePools or NodeClasses alongside the default configurations to meet your specific requirements. 
Automated Components EKS Auto Mode streamlines the operation of your Amazon EKS clusters by automating key infrastructure components. Enabling EKS Auto Mode further reduces the tasks to manage your EKS clusters. The following is a list of data plane components that are automated: • Compute: For many workloads, with EKS Auto Mode you can forget about many aspects of compute for your EKS clusters. These include: • Nodes: EKS Auto Mode nodes are designed to be treated like appliances. EKS Auto Mode does the following: • Chooses an appropriate AMI that’s configured with many services needed to run your workloads without intervention. • Locks down access to files on the AMI using SELinux enforcing mode and a read-only root file system. • Prevents direct access to the nodes by disallowing SSH or SSM access. • Includes GPU support, with separate kernel drivers and plugins for NVIDIA and Neuron GPUs, enabling high-performance workloads. Automated Components 75 Amazon EKS User Guide • Automatically handles EC2 Spot Instance interruption notices and EC2 Instance health events • Auto scaling: Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking"} +{"global_id": 1101, "doc_id": "eks", "chunk_id": "32", "question_id": 2, "question": "What should you not edit in EKS Auto Mode?", "answer_span": "While you should not edit default NodePools and NodeClasses, you can add new custom NodePools or NodeClasses alongside the default configurations to meet your specific requirements.", "chunk": "This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While you should not edit default NodePools and NodeClasses, you can add new custom NodePools or NodeClasses alongside the default configurations to meet your specific requirements. Automated Components EKS Auto Mode streamlines the operation of your Amazon EKS clusters by automating key infrastructure components. Enabling EKS Auto Mode further reduces the tasks to manage your EKS clusters. The following is a list of data plane components that are automated: • Compute: For many workloads, with EKS Auto Mode you can forget about many aspects of compute for your EKS clusters. These include: • Nodes: EKS Auto Mode nodes are designed to be treated like appliances. EKS Auto Mode does the following: • Chooses an appropriate AMI that’s configured with many services needed to run your workloads without intervention. • Locks down access to files on the AMI using SELinux enforcing mode and a read-only root file system. • Prevents direct access to the nodes by disallowing SSH or SSM access. • Includes GPU support, with separate kernel drivers and plugins for NVIDIA and Neuron GPUs, enabling high-performance workloads. Automated Components 75 Amazon EKS User Guide • Automatically handles EC2 Spot Instance interruption notices and EC2 Instance health events • Auto scaling: Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. 
As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking"} +{"global_id": 1102, "doc_id": "eks", "chunk_id": "32", "question_id": 3, "question": "What does EKS Auto Mode do for nodes?", "answer_span": "EKS Auto Mode nodes are designed to be treated like appliances.", "chunk": "This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While you should not edit default NodePools and NodeClasses, you can add new custom NodePools or NodeClasses alongside the default configurations to meet your specific requirements. Automated Components EKS Auto Mode streamlines the operation of your Amazon EKS clusters by automating key infrastructure components. Enabling EKS Auto Mode further reduces the tasks to manage your EKS clusters. The following is a list of data plane components that are automated: • Compute: For many workloads, with EKS Auto Mode you can forget about many aspects of compute for your EKS clusters. These include: • Nodes: EKS Auto Mode nodes are designed to be treated like appliances. EKS Auto Mode does the following: • Chooses an appropriate AMI that’s configured with many services needed to run your workloads without intervention. • Locks down access to files on the AMI using SELinux enforcing mode and a read-only root file system. • Prevents direct access to the nodes by disallowing SSH or SSM access. • Includes GPU support, with separate kernel drivers and plugins for NVIDIA and Neuron GPUs, enabling high-performance workloads. Automated Components 75 Amazon EKS User Guide • Automatically handles EC2 Spot Instance interruption notices and EC2 Instance health events • Auto scaling: Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking"} +{"global_id": 1103, "doc_id": "eks", "chunk_id": "32", "question_id": 4, "question": "How does EKS Auto Mode handle EC2 Spot Instance interruption notices?", "answer_span": "Automatically handles EC2 Spot Instance interruption notices and EC2 Instance health events.", "chunk": "This includes built-in support for Pod IP address assignments, Pod network policies, local DNS services, GPU plug-ins, health checkers, and EBS CSI storage. Customizable NodePools and NodeClasses: If your workload requires changes to storage, compute, or networking configurations, you can create custom NodePools and NodeClasses using EKS Auto Mode. While you should not edit default NodePools and NodeClasses, you can add new custom NodePools or NodeClasses alongside the default configurations to meet your specific requirements. Automated Components EKS Auto Mode streamlines the operation of your Amazon EKS clusters by automating key infrastructure components. Enabling EKS Auto Mode further reduces the tasks to manage your EKS clusters. The following is a list of data plane components that are automated: • Compute: For many workloads, with EKS Auto Mode you can forget about many aspects of compute for your EKS clusters. 
These include: • Nodes: EKS Auto Mode nodes are designed to be treated like appliances. EKS Auto Mode does the following: • Chooses an appropriate AMI that’s configured with many services needed to run your workloads without intervention. • Locks down access to files on the AMI using SELinux enforcing mode and a read-only root file system. • Prevents direct access to the nodes by disallowing SSH or SSM access. • Includes GPU support, with separate kernel drivers and plugins for NVIDIA and Neuron GPUs, enabling high-performance workloads. Automated Components 75 Amazon EKS User Guide • Automatically handles EC2 Spot Instance interruption notices and EC2 Instance health events • Auto scaling: Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking"} +{"global_id": 1104, "doc_id": "eks", "chunk_id": "33", "question_id": 1, "question": "What does EKS Auto Mode monitor for?", "answer_span": "EKS Auto Mode monitors for unschedulable Pods", "chunk": "Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking control of your nodes streamlines EKS Auto Mode’s ability to provide security patches and operating system and component upgrades as needed. Those upgrades are designed to provide minimal disruption of your workloads. EKS Auto Mode enforces a 21day maximum node lifetime to ensure up-to-date software and APIs. • Load balancing: EKS Auto Mode streamlines load balancing by integrating with Amazon’s Elastic Load Balancing service, automating the provisioning and configuration of load balancers for Kubernetes Services and Ingress resources. It supports advanced features for both Application and Network Load Balancers, manages their lifecycle, and scales them to match cluster demands. This integration provides a production-ready load balancing solution adhering to AWS best practices, allowing you to focus on applications rather than infrastructure management. • Storage: EKS Auto Mode configures ephemeral storage for you by setting up volume types, volume sizes, encryption policies, and deletion policies upon node termination. • Networking: EKS Auto Mode automates critical networking tasks for Pod and service connectivity. This includes IPv4/IPv6 support and the use of secondary CIDR blocks for extending IP address spaces. • Identity and Access Management: You do not have to install the EKS Pod Identity Agent on EKS Auto Mode clusters. For more information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. 
You can modify the configuration of"} +{"global_id": 1105, "doc_id": "eks", "chunk_id": "33", "question_id": 2, "question": "What is the maximum node lifetime enforced by EKS Auto Mode?", "answer_span": "EKS Auto Mode enforces a 21day maximum node lifetime", "chunk": "Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking control of your nodes streamlines EKS Auto Mode’s ability to provide security patches and operating system and component upgrades as needed. Those upgrades are designed to provide minimal disruption of your workloads. EKS Auto Mode enforces a 21day maximum node lifetime to ensure up-to-date software and APIs. • Load balancing: EKS Auto Mode streamlines load balancing by integrating with Amazon’s Elastic Load Balancing service, automating the provisioning and configuration of load balancers for Kubernetes Services and Ingress resources. It supports advanced features for both Application and Network Load Balancers, manages their lifecycle, and scales them to match cluster demands. This integration provides a production-ready load balancing solution adhering to AWS best practices, allowing you to focus on applications rather than infrastructure management. • Storage: EKS Auto Mode configures ephemeral storage for you by setting up volume types, volume sizes, encryption policies, and deletion policies upon node termination. • Networking: EKS Auto Mode automates critical networking tasks for Pod and service connectivity. This includes IPv4/IPv6 support and the use of secondary CIDR blocks for extending IP address spaces. • Identity and Access Management: You do not have to install the EKS Pod Identity Agent on EKS Auto Mode clusters. For more information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. You can modify the configuration of"} +{"global_id": 1106, "doc_id": "eks", "chunk_id": "33", "question_id": 3, "question": "How does EKS Auto Mode streamline load balancing?", "answer_span": "EKS Auto Mode streamlines load balancing by integrating with Amazon’s Elastic Load Balancing service", "chunk": "Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking control of your nodes streamlines EKS Auto Mode’s ability to provide security patches and operating system and component upgrades as needed. Those upgrades are designed to provide minimal disruption of your workloads. EKS Auto Mode enforces a 21day maximum node lifetime to ensure up-to-date software and APIs. • Load balancing: EKS Auto Mode streamlines load balancing by integrating with Amazon’s Elastic Load Balancing service, automating the provisioning and configuration of load balancers for Kubernetes Services and Ingress resources. It supports advanced features for both Application and Network Load Balancers, manages their lifecycle, and scales them to match cluster demands. 
This integration provides a production-ready load balancing solution adhering to AWS best practices, allowing you to focus on applications rather than infrastructure management. • Storage: EKS Auto Mode configures ephemeral storage for you by setting up volume types, volume sizes, encryption policies, and deletion policies upon node termination. • Networking: EKS Auto Mode automates critical networking tasks for Pod and service connectivity. This includes IPv4/IPv6 support and the use of secondary CIDR blocks for extending IP address spaces. • Identity and Access Management: You do not have to install the EKS Pod Identity Agent on EKS Auto Mode clusters. For more information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. You can modify the configuration of"} +{"global_id": 1107, "doc_id": "eks", "chunk_id": "33", "question_id": 4, "question": "What critical networking tasks does EKS Auto Mode automate?", "answer_span": "EKS Auto Mode automates critical networking tasks for Pod and service connectivity", "chunk": "Relying on Karpenter auto scaling, EKS Auto Mode monitors for unschedulable Pods and makes it possible for new nodes to be deployed to run those pods. As workloads are terminated, EKS Auto Mode dynamically disrupts and terminates nodes when they are no longer needed, optimizing resource usage. • Upgrades: Taking control of your nodes streamlines EKS Auto Mode’s ability to provide security patches and operating system and component upgrades as needed. Those upgrades are designed to provide minimal disruption of your workloads. EKS Auto Mode enforces a 21day maximum node lifetime to ensure up-to-date software and APIs. • Load balancing: EKS Auto Mode streamlines load balancing by integrating with Amazon’s Elastic Load Balancing service, automating the provisioning and configuration of load balancers for Kubernetes Services and Ingress resources. It supports advanced features for both Application and Network Load Balancers, manages their lifecycle, and scales them to match cluster demands. This integration provides a production-ready load balancing solution adhering to AWS best practices, allowing you to focus on applications rather than infrastructure management. • Storage: EKS Auto Mode configures ephemeral storage for you by setting up volume types, volume sizes, encryption policies, and deletion policies upon node termination. • Networking: EKS Auto Mode automates critical networking tasks for Pod and service connectivity. This includes IPv4/IPv6 support and the use of secondary CIDR blocks for extending IP address spaces. • Identity and Access Management: You do not have to install the EKS Pod Identity Agent on EKS Auto Mode clusters. For more information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. 
You can modify the configuration of"} +{"global_id": 1108, "doc_id": "eks", "chunk_id": "34", "question_id": 1, "question": "What is the purpose of the Kubernetes Dashboard?", "answer_span": "The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters.", "chunk": "information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. You can modify the configuration of your EKS Auto Mode clusters in the following ways: • Kubernetes DaemonSets: Rather than modify services installed on your nodes, you can instead use Kubernetes daemonsets. Daemonsets are designed to be managed by Kubernetes, but run Configuration 76 Amazon EKS User Guide Organize and monitor cluster resources This chapter includes the following topics to help you manage your cluster. You can also view information about your Kubernetes resources with the AWS Management Console. • The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. For more information, see The Kubernetes Dashboard GitHub repository. • the section called “Metrics server” – The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster. It isn’t deployed by default in your cluster, but is used by Kubernetes add-ons, such as the Kubernetes Dashboard and the section called “Horizontal Pod Autoscaler”. In this topic you learn how to install the Metrics Server. • the section called “Deploy apps with Helm” – The Helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster. This topic helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in"} +{"global_id": 1109, "doc_id": "eks", "chunk_id": "34", "question_id": 2, "question": "What is the Kubernetes Metrics Server used for?", "answer_span": "The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster.", "chunk": "information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. You can modify the configuration of your EKS Auto Mode clusters in the following ways: • Kubernetes DaemonSets: Rather than modify services installed on your nodes, you can instead use Kubernetes daemonsets. Daemonsets are designed to be managed by Kubernetes, but run Configuration 76 Amazon EKS User Guide Organize and monitor cluster resources This chapter includes the following topics to help you manage your cluster. You can also view information about your Kubernetes resources with the AWS Management Console. • The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. For more information, see The Kubernetes Dashboard GitHub repository. 
• the section called “Metrics server” – The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster. It isn’t deployed by default in your cluster, but is used by Kubernetes add-ons, such as the Kubernetes Dashboard and the section called “Horizontal Pod Autoscaler”. In this topic you learn how to install the Metrics Server. • the section called “Deploy apps with Helm” – The Helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster. This topic helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in"} +{"global_id": 1110, "doc_id": "eks", "chunk_id": "34", "question_id": 3, "question": "What does the Helm package manager help you do?", "answer_span": "The Helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster.", "chunk": "information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. You can modify the configuration of your EKS Auto Mode clusters in the following ways: • Kubernetes DaemonSets: Rather than modify services installed on your nodes, you can instead use Kubernetes daemonsets. Daemonsets are designed to be managed by Kubernetes, but run Configuration 76 Amazon EKS User Guide Organize and monitor cluster resources This chapter includes the following topics to help you manage your cluster. You can also view information about your Kubernetes resources with the AWS Management Console. • The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. For more information, see The Kubernetes Dashboard GitHub repository. • the section called “Metrics server” – The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster. It isn’t deployed by default in your cluster, but is used by Kubernetes add-ons, such as the Kubernetes Dashboard and the section called “Horizontal Pod Autoscaler”. In this topic you learn how to install the Metrics Server. • the section called “Deploy apps with Helm” – The Helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster. This topic helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in"} +{"global_id": 1111, "doc_id": "eks", "chunk_id": "34", "question_id": 4, "question": "How can you modify the configuration of your EKS Auto Mode clusters?", "answer_span": "You can modify the configuration of your EKS Auto Mode clusters in the following ways: • Kubernetes DaemonSets.", "chunk": "information about these components, see the section called “How it works”. Configuration While EKS Auto Mode will effectively manage most of your data plane services without your intervention, there might be times when you want to change the behavior of some of those services. 
You can modify the configuration of your EKS Auto Mode clusters in the following ways: • Kubernetes DaemonSets: Rather than modify services installed on your nodes, you can instead use Kubernetes daemonsets. Daemonsets are designed to be managed by Kubernetes, but run Configuration 76 Amazon EKS User Guide Organize and monitor cluster resources This chapter includes the following topics to help you manage your cluster. You can also view information about your Kubernetes resources with the AWS Management Console. • The Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. For more information, see The Kubernetes Dashboard GitHub repository. • the section called “Metrics server” – The Kubernetes Metrics Server is an aggregator of resource usage data in your cluster. It isn’t deployed by default in your cluster, but is used by Kubernetes add-ons, such as the Kubernetes Dashboard and the section called “Horizontal Pod Autoscaler”. In this topic you learn how to install the Metrics Server. • the section called “Deploy apps with Helm” – The Helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster. This topic helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in"} +{"global_id": 1112, "doc_id": "eks", "chunk_id": "35", "question_id": 1, "question": "What does the Helm binaries installation help you do?", "answer_span": "helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer.", "chunk": "helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in the form of tags. This topic describes tags and shows you how to create them. • the section called “Service quotas” – Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Learn about the quotas for Amazon EKS and how to increase them. Monitor and optimize Amazon EKS cluster costs Cost monitoring is an essential aspect of managing your Kubernetes clusters on Amazon EKS. By gaining visibility into your cluster costs, you can optimize resource utilization, set budgets, and make data-driven decisions about your deployments. Amazon EKS provides two cost monitoring solutions, each with its own unique advantages, to help you track and allocate your costs effectively: AWS Billing split cost allocation data for Amazon EKS — This native feature integrates seamlessly with the AWS Billing Console, allowing you to analyze and allocate costs using the same familiar interface and workflows you use for other AWS services. With split cost allocation, you can gain insights into your Kubernetes costs directly alongside your other AWS spend, making it easier Cost monitoring 1271 Amazon EKS User Guide to optimize costs holistically across your AWS environment. You can also leverage existing AWS Billing features like Cost Categories and Cost Anomaly Detection to further enhance your cost management capabilities. 
For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns"} +{"global_id": 1113, "doc_id": "eks", "chunk_id": "35", "question_id": 2, "question": "What can you assign to each Amazon EKS resource to help manage them?", "answer_span": "you can assign your own metadata to each resource in the form of tags.", "chunk": "helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in the form of tags. This topic describes tags and shows you how to create them. • the section called “Service quotas” – Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Learn about the quotas for Amazon EKS and how to increase them. Monitor and optimize Amazon EKS cluster costs Cost monitoring is an essential aspect of managing your Kubernetes clusters on Amazon EKS. By gaining visibility into your cluster costs, you can optimize resource utilization, set budgets, and make data-driven decisions about your deployments. Amazon EKS provides two cost monitoring solutions, each with its own unique advantages, to help you track and allocate your costs effectively: AWS Billing split cost allocation data for Amazon EKS — This native feature integrates seamlessly with the AWS Billing Console, allowing you to analyze and allocate costs using the same familiar interface and workflows you use for other AWS services. With split cost allocation, you can gain insights into your Kubernetes costs directly alongside your other AWS spend, making it easier Cost monitoring 1271 Amazon EKS User Guide to optimize costs holistically across your AWS environment. You can also leverage existing AWS Billing features like Cost Categories and Cost Anomaly Detection to further enhance your cost management capabilities. For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns"} +{"global_id": 1114, "doc_id": "eks", "chunk_id": "35", "question_id": 3, "question": "What are the default quotas for each AWS service formerly referred to as?", "answer_span": "default quotas, formerly referred to as limits, for each AWS service.", "chunk": "helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in the form of tags. This topic describes tags and shows you how to create them. • the section called “Service quotas” – Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Learn about the quotas for Amazon EKS and how to increase them. Monitor and optimize Amazon EKS cluster costs Cost monitoring is an essential aspect of managing your Kubernetes clusters on Amazon EKS. 
By gaining visibility into your cluster costs, you can optimize resource utilization, set budgets, and make data-driven decisions about your deployments. Amazon EKS provides two cost monitoring solutions, each with its own unique advantages, to help you track and allocate your costs effectively: AWS Billing split cost allocation data for Amazon EKS — This native feature integrates seamlessly with the AWS Billing Console, allowing you to analyze and allocate costs using the same familiar interface and workflows you use for other AWS services. With split cost allocation, you can gain insights into your Kubernetes costs directly alongside your other AWS spend, making it easier Cost monitoring 1271 Amazon EKS User Guide to optimize costs holistically across your AWS environment. You can also leverage existing AWS Billing features like Cost Categories and Cost Anomaly Detection to further enhance your cost management capabilities. For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns"} +{"global_id": 1115, "doc_id": "eks", "chunk_id": "35", "question_id": 4, "question": "What does Amazon EKS provide to help track and allocate costs effectively?", "answer_span": "Amazon EKS provides two cost monitoring solutions, each with its own unique advantages, to help you track and allocate your costs effectively.", "chunk": "helps you install and run the Helm binaries so that you can install and manage charts using the Helm CLI on your local computer. • the section called “Tagging your resources” – To help you manage your Amazon EKS resources, you can assign your own metadata to each resource in the form of tags. This topic describes tags and shows you how to create them. • the section called “Service quotas” – Your AWS account has default quotas, formerly referred to as limits, for each AWS service. Learn about the quotas for Amazon EKS and how to increase them. Monitor and optimize Amazon EKS cluster costs Cost monitoring is an essential aspect of managing your Kubernetes clusters on Amazon EKS. By gaining visibility into your cluster costs, you can optimize resource utilization, set budgets, and make data-driven decisions about your deployments. Amazon EKS provides two cost monitoring solutions, each with its own unique advantages, to help you track and allocate your costs effectively: AWS Billing split cost allocation data for Amazon EKS — This native feature integrates seamlessly with the AWS Billing Console, allowing you to analyze and allocate costs using the same familiar interface and workflows you use for other AWS services. With split cost allocation, you can gain insights into your Kubernetes costs directly alongside your other AWS spend, making it easier Cost monitoring 1271 Amazon EKS User Guide to optimize costs holistically across your AWS environment. You can also leverage existing AWS Billing features like Cost Categories and Cost Anomaly Detection to further enhance your cost management capabilities. For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. 
Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns"} +{"global_id": 1116, "doc_id": "eks", "chunk_id": "36", "question_id": 1, "question": "What tool does Amazon EKS support for cost monitoring?", "answer_span": "Kubecost, a Kubernetes cost monitoring tool.", "chunk": "and Cost Anomaly Detection to further enhance your cost management capabilities. For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns by Kubernetes resources, cost optimization recommendations, and out-of-the-box dashboards and reports. Kubecost also retrieves accurate pricing data by integrating with the AWS Cost and Usage Report, ensuring you get a precise view of your Amazon EKS costs. Learn how to Install Kubecost. See the Kubecost AWS Marketplace page for information on getting a free Kubecost subscription. View costs by Pod in AWS billing with split cost allocation Cost monitoring using AWS split cost allocation data for Amazon EKS You can use AWS split cost allocation data for Amazon EKS to get granular cost visibility for your Amazon EKS clusters. This enables you to analyze, optimize, and chargeback cost and usage for your Kubernetes applications. You allocate application costs to individual business units and teams based on Amazon EC2 CPU and memory resources consumed by your Kubernetes application. Split cost allocation data for Amazon EKS gives visibility into cost per Pod, and enables you to aggregate the cost data per Pod using namespace, cluster, and other Kubernetes primitives. The following are examples of Kubernetes primitives that you can use to analyze Amazon EKS cost allocation data. • Cluster name • Deployment • Namespace • Node • Workload Name • Workload Type User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS"} +{"global_id": 1117, "doc_id": "eks", "chunk_id": "36", "question_id": 2, "question": "What does split cost allocation data for Amazon EKS enable you to do?", "answer_span": "This enables you to analyze, optimize, and chargeback cost and usage for your Kubernetes applications.", "chunk": "and Cost Anomaly Detection to further enhance your cost management capabilities. For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns by Kubernetes resources, cost optimization recommendations, and out-of-the-box dashboards and reports. Kubecost also retrieves accurate pricing data by integrating with the AWS Cost and Usage Report, ensuring you get a precise view of your Amazon EKS costs. Learn how to Install Kubecost. See the Kubecost AWS Marketplace page for information on getting a free Kubecost subscription. 
View costs by Pod in AWS billing with split cost allocation Cost monitoring using AWS split cost allocation data for Amazon EKS You can use AWS split cost allocation data for Amazon EKS to get granular cost visibility for your Amazon EKS clusters. This enables you to analyze, optimize, and chargeback cost and usage for your Kubernetes applications. You allocate application costs to individual business units and teams based on Amazon EC2 CPU and memory resources consumed by your Kubernetes application. Split cost allocation data for Amazon EKS gives visibility into cost per Pod, and enables you to aggregate the cost data per Pod using namespace, cluster, and other Kubernetes primitives. The following are examples of Kubernetes primitives that you can use to analyze Amazon EKS cost allocation data. • Cluster name • Deployment • Namespace • Node • Workload Name • Workload Type User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS"} +{"global_id": 1118, "doc_id": "eks", "chunk_id": "36", "question_id": 3, "question": "What can you allocate application costs based on?", "answer_span": "You allocate application costs to individual business units and teams based on Amazon EC2 CPU and memory resources consumed by your Kubernetes application.", "chunk": "and Cost Anomaly Detection to further enhance your cost management capabilities. For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns by Kubernetes resources, cost optimization recommendations, and out-of-the-box dashboards and reports. Kubecost also retrieves accurate pricing data by integrating with the AWS Cost and Usage Report, ensuring you get a precise view of your Amazon EKS costs. Learn how to Install Kubecost. See the Kubecost AWS Marketplace page for information on getting a free Kubecost subscription. View costs by Pod in AWS billing with split cost allocation Cost monitoring using AWS split cost allocation data for Amazon EKS You can use AWS split cost allocation data for Amazon EKS to get granular cost visibility for your Amazon EKS clusters. This enables you to analyze, optimize, and chargeback cost and usage for your Kubernetes applications. You allocate application costs to individual business units and teams based on Amazon EC2 CPU and memory resources consumed by your Kubernetes application. Split cost allocation data for Amazon EKS gives visibility into cost per Pod, and enables you to aggregate the cost data per Pod using namespace, cluster, and other Kubernetes primitives. The following are examples of Kubernetes primitives that you can use to analyze Amazon EKS cost allocation data. • Cluster name • Deployment • Namespace • Node • Workload Name • Workload Type User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. 
Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS"} +{"global_id": 1119, "doc_id": "eks", "chunk_id": "36", "question_id": 4, "question": "Where can you find more information about using split cost allocation data?", "answer_span": "For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide.", "chunk": "and Cost Anomaly Detection to further enhance your cost management capabilities. For more information, see Understanding split cost allocation data in the AWS Billing User Guide. Kubecost — Amazon EKS supports Kubecost, a Kubernetes cost monitoring tool. Kubecost offers a feature-rich, Kubernetes-native approach to cost monitoring, providing granular cost breakdowns by Kubernetes resources, cost optimization recommendations, and out-of-the-box dashboards and reports. Kubecost also retrieves accurate pricing data by integrating with the AWS Cost and Usage Report, ensuring you get a precise view of your Amazon EKS costs. Learn how to Install Kubecost. See the Kubecost AWS Marketplace page for information on getting a free Kubecost subscription. View costs by Pod in AWS billing with split cost allocation Cost monitoring using AWS split cost allocation data for Amazon EKS You can use AWS split cost allocation data for Amazon EKS to get granular cost visibility for your Amazon EKS clusters. This enables you to analyze, optimize, and chargeback cost and usage for your Kubernetes applications. You allocate application costs to individual business units and teams based on Amazon EC2 CPU and memory resources consumed by your Kubernetes application. Split cost allocation data for Amazon EKS gives visibility into cost per Pod, and enables you to aggregate the cost data per Pod using namespace, cluster, and other Kubernetes primitives. The following are examples of Kubernetes primitives that you can use to analyze Amazon EKS cost allocation data. • Cluster name • Deployment • Namespace • Node • Workload Name • Workload Type User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS"} +{"global_id": 1120, "doc_id": "eks", "chunk_id": "37", "question_id": 1, "question": "What type of tags are supported for cost allocation?", "answer_span": "User-defined cost allocation tags are also supported.", "chunk": "User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS Command Line Interface, or the AWS SDKs. View costs by Pod 1272"} +{"global_id": 1121, "doc_id": "eks", "chunk_id": "37", "question_id": 2, "question": "Where can you find more information about using split cost allocation data?", "answer_span": "see Understanding split cost allocation data in the AWS Billing User Guide.", "chunk": "User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. 
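Once split cost allocation data is enabled, aggregate EKS spend can also be pulled with the Cost Explorer API alongside your other AWS costs. This is a hedged sketch using boto3; the date range, the tag key `team`, and the billing service name are assumptions for illustration and may differ in your account.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Monthly EKS-related cost, grouped by a user-defined cost allocation tag.
# The service name and tag key below are assumptions; verify them against
# your own billing data before relying on this query.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Container Service for Kubernetes"],
        }
    },
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```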
Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS Command Line Interface, or the AWS SDKs. View costs by Pod 1272"} +{"global_id": 1122, "doc_id": "eks", "chunk_id": "37", "question_id": 3, "question": "How can you turn on Split Cost Allocation Data for EKS?", "answer_span": "You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS Command Line Interface, or the AWS SDKs.", "chunk": "User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS Command Line Interface, or the AWS SDKs. View costs by Pod 1272"} +{"global_id": 1123, "doc_id": "eks", "chunk_id": "37", "question_id": 4, "question": "What can you view costs by?", "answer_span": "View costs by Pod 1272", "chunk": "User-defined cost allocation tags are also supported. For more information about using split cost allocation data, see Understanding split cost allocation data in the AWS Billing User Guide. Set up Cost and Usage Reports You can turn on Split Cost Allocation Data for EKS in the Cost Management Console, AWS Command Line Interface, or the AWS SDKs. View costs by Pod 1272"} +{"global_id": 1124, "doc_id": "rds", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Relational Database Service (Amazon RDS)?", "answer_span": "Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud.", "chunk": "Amazon Relational Database Service User Guide What is Amazon Relational Database Service (Amazon RDS)? Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Note This guide covers Amazon RDS database engines other than Amazon Aurora. For information about using Amazon Aurora, see the Amazon Aurora User Guide. If you are new to AWS products and services, begin learning more with the following resources: • For an overview of all AWS products, see What is cloud computing? • Amazon Web Services provides a number of database services. To learn more about the variety of database options available on AWS, see Choosing an AWS database service and Running databases on AWS. Advantages of Amazon RDS Amazon RDS is a managed database service. It's responsible for most management tasks. By eliminating tedious manual processes, Amazon RDS frees you to focus on your application and your users. Amazon RDS provides the following principal advantages over database deployments that aren't fully managed: • You can use database engines that you are already familiar with: IBM Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle Database, and PostgreSQL. • Amazon RDS manages backups, software patching, automatic failure detection, and recovery. • You can turn on automated backups, or manually create your own backup snapshots. You can use these backups to restore a database. The Amazon RDS restore process works reliably and efficiently. 
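The manual snapshot and restore workflow mentioned in the backup advantage above can also be scripted. The sketch below uses boto3; the instance and snapshot identifiers are hypothetical placeholders.

```python
import boto3

rds = boto3.client("rds")

# Take a manual snapshot of an existing DB instance (identifiers are
# placeholders for illustration).
rds.create_db_snapshot(
    DBInstanceIdentifier="mydbinstance",
    DBSnapshotIdentifier="mydbinstance-before-upgrade",
)

# Wait until the snapshot is available before using it.
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydbinstance-before-upgrade"
)

# Restore the snapshot into a new DB instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydbinstance-restored",
    DBSnapshotIdentifier="mydbinstance-before-upgrade",
)
```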
Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. You can also use read"} +{"global_id": 1125, "doc_id": "rds", "chunk_id": "0", "question_id": 2, "question": "What does Amazon RDS manage?", "answer_span": "Amazon RDS manages backups, software patching, automatic failure detection, and recovery.", "chunk": "Amazon Relational Database Service User Guide What is Amazon Relational Database Service (Amazon RDS)? Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Note This guide covers Amazon RDS database engines other than Amazon Aurora. For information about using Amazon Aurora, see the Amazon Aurora User Guide. If you are new to AWS products and services, begin learning more with the following resources: • For an overview of all AWS products, see What is cloud computing? • Amazon Web Services provides a number of database services. To learn more about the variety of database options available on AWS, see Choosing an AWS database service and Running databases on AWS. Advantages of Amazon RDS Amazon RDS is a managed database service. It's responsible for most management tasks. By eliminating tedious manual processes, Amazon RDS frees you to focus on your application and your users. Amazon RDS provides the following principal advantages over database deployments that aren't fully managed: • You can use database engines that you are already familiar with: IBM Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle Database, and PostgreSQL. • Amazon RDS manages backups, software patching, automatic failure detection, and recovery. • You can turn on automated backups, or manually create your own backup snapshots. You can use these backups to restore a database. The Amazon RDS restore process works reliably and efficiently. Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. You can also use read"} +{"global_id": 1126, "doc_id": "rds", "chunk_id": "0", "question_id": 3, "question": "What can you use to restore a database in Amazon RDS?", "answer_span": "You can use these backups to restore a database.", "chunk": "Amazon Relational Database Service User Guide What is Amazon Relational Database Service (Amazon RDS)? Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Note This guide covers Amazon RDS database engines other than Amazon Aurora. For information about using Amazon Aurora, see the Amazon Aurora User Guide. If you are new to AWS products and services, begin learning more with the following resources: • For an overview of all AWS products, see What is cloud computing? • Amazon Web Services provides a number of database services. To learn more about the variety of database options available on AWS, see Choosing an AWS database service and Running databases on AWS. 
Advantages of Amazon RDS Amazon RDS is a managed database service. It's responsible for most management tasks. By eliminating tedious manual processes, Amazon RDS frees you to focus on your application and your users. Amazon RDS provides the following principal advantages over database deployments that aren't fully managed: • You can use database engines that you are already familiar with: IBM Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle Database, and PostgreSQL. • Amazon RDS manages backups, software patching, automatic failure detection, and recovery. • You can turn on automated backups, or manually create your own backup snapshots. You can use these backups to restore a database. The Amazon RDS restore process works reliably and efficiently. Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. You can also use read"} +{"global_id": 1127, "doc_id": "rds", "chunk_id": "0", "question_id": 4, "question": "What is one advantage of Amazon RDS regarding availability?", "answer_span": "You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur.", "chunk": "Amazon Relational Database Service User Guide What is Amazon Relational Database Service (Amazon RDS)? Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the AWS Cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks. Note This guide covers Amazon RDS database engines other than Amazon Aurora. For information about using Amazon Aurora, see the Amazon Aurora User Guide. If you are new to AWS products and services, begin learning more with the following resources: • For an overview of all AWS products, see What is cloud computing? • Amazon Web Services provides a number of database services. To learn more about the variety of database options available on AWS, see Choosing an AWS database service and Running databases on AWS. Advantages of Amazon RDS Amazon RDS is a managed database service. It's responsible for most management tasks. By eliminating tedious manual processes, Amazon RDS frees you to focus on your application and your users. Amazon RDS provides the following principal advantages over database deployments that aren't fully managed: • You can use database engines that you are already familiar with: IBM Db2, MariaDB, Microsoft SQL Server, MySQL, Oracle Database, and PostgreSQL. • Amazon RDS manages backups, software patching, automatic failure detection, and recovery. • You can turn on automated backups, or manually create your own backup snapshots. You can use these backups to restore a database. The Amazon RDS restore process works reliably and efficiently. Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. 
You can also use read"} +{"global_id": 1128, "doc_id": "rds", "chunk_id": "1", "question_id": 1, "question": "What is one advantage of Amazon RDS regarding availability?", "answer_span": "You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur.", "chunk": "Amazon RDS restore process works reliably and efficiently. Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. You can also use read replicas to increase read scaling. • In addition to the security in your database package, you can control access by using AWS Identity and Access Management (IAM) to define users and permissions. You can also help protect your databases by putting them in a virtual private cloud (VPC). Comparison of responsibilities with Amazon EC2 and onpremises deployments We recommend Amazon RDS as your default choice for most relational database deployments. The following alternatives have the disadvantage of making you spend more time managing software and hardware: On-premises deployment When you buy an on-premises server, you get CPU, memory, storage, and IOPS, all bundled together. You assume full responsibility for the server, operating system, and database software. Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Unlike in an on-premises server, CPU, memory, storage, and IOPS are separated so that you can scale them independently. AWS manages the hardware layers, which eliminates some of the burden of managing an on-premises database server. The disadvantage to running a database on Amazon EC2 is that you're more prone to user errors. For example, when you update the operating system or database software manually, you might accidentally cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management"} +{"global_id": 1129, "doc_id": "rds", "chunk_id": "1", "question_id": 2, "question": "How can you control access to your databases in Amazon RDS?", "answer_span": "you can control access by using AWS Identity and Access Management (IAM) to define users and permissions.", "chunk": "Amazon RDS restore process works reliably and efficiently. Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. You can also use read replicas to increase read scaling. • In addition to the security in your database package, you can control access by using AWS Identity and Access Management (IAM) to define users and permissions. You can also help protect your databases by putting them in a virtual private cloud (VPC). Comparison of responsibilities with Amazon EC2 and onpremises deployments We recommend Amazon RDS as your default choice for most relational database deployments. 
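The high availability and read scaling advantages listed above map to two common RDS operations. The sketch below, using boto3, enables a Multi-AZ standby on an existing instance and adds a read replica; all identifiers are placeholders, and this is not presented as the guide's own procedure.

```python
import boto3

rds = boto3.client("rds")

# Turn on a synchronous standby (Multi-AZ) for an existing instance.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    MultiAZ=True,
    ApplyImmediately=True,
)

# Add a read replica to offload read traffic from the primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydbinstance-replica-1",
    SourceDBInstanceIdentifier="mydbinstance",
)
```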
The following alternatives have the disadvantage of making you spend more time managing software and hardware: On-premises deployment When you buy an on-premises server, you get CPU, memory, storage, and IOPS, all bundled together. You assume full responsibility for the server, operating system, and database software. Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Unlike in an on-premises server, CPU, memory, storage, and IOPS are separated so that you can scale them independently. AWS manages the hardware layers, which eliminates some of the burden of managing an on-premises database server. The disadvantage to running a database on Amazon EC2 is that you're more prone to user errors. For example, when you update the operating system or database software manually, you might accidentally cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management"} +{"global_id": 1130, "doc_id": "rds", "chunk_id": "1", "question_id": 3, "question": "What is a disadvantage of on-premises deployment?", "answer_span": "You assume full responsibility for the server, operating system, and database software.", "chunk": "Amazon RDS restore process works reliably and efficiently. Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. You can also use read replicas to increase read scaling. • In addition to the security in your database package, you can control access by using AWS Identity and Access Management (IAM) to define users and permissions. You can also help protect your databases by putting them in a virtual private cloud (VPC). Comparison of responsibilities with Amazon EC2 and onpremises deployments We recommend Amazon RDS as your default choice for most relational database deployments. The following alternatives have the disadvantage of making you spend more time managing software and hardware: On-premises deployment When you buy an on-premises server, you get CPU, memory, storage, and IOPS, all bundled together. You assume full responsibility for the server, operating system, and database software. Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Unlike in an on-premises server, CPU, memory, storage, and IOPS are separated so that you can scale them independently. AWS manages the hardware layers, which eliminates some of the burden of managing an on-premises database server. The disadvantage to running a database on Amazon EC2 is that you're more prone to user errors. For example, when you update the operating system or database software manually, you might accidentally cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. 
Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management"} +{"global_id": 1131, "doc_id": "rds", "chunk_id": "1", "question_id": 4, "question": "What does AWS manage in Amazon EC2 that reduces management burden?", "answer_span": "AWS manages the hardware layers, which eliminates some of the burden of managing an on-premises database server.", "chunk": "Amazon RDS restore process works reliably and efficiently. Advantages of Amazon RDS 1 Amazon Relational Database Service User Guide • You can get high availability with a primary DB instance and a synchronous secondary DB instance that you can fail over to when problems occur. You can also use read replicas to increase read scaling. • In addition to the security in your database package, you can control access by using AWS Identity and Access Management (IAM) to define users and permissions. You can also help protect your databases by putting them in a virtual private cloud (VPC). Comparison of responsibilities with Amazon EC2 and onpremises deployments We recommend Amazon RDS as your default choice for most relational database deployments. The following alternatives have the disadvantage of making you spend more time managing software and hardware: On-premises deployment When you buy an on-premises server, you get CPU, memory, storage, and IOPS, all bundled together. You assume full responsibility for the server, operating system, and database software. Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the AWS Cloud. Unlike in an on-premises server, CPU, memory, storage, and IOPS are separated so that you can scale them independently. AWS manages the hardware layers, which eliminates some of the burden of managing an on-premises database server. The disadvantage to running a database on Amazon EC2 is that you're more prone to user errors. For example, when you update the operating system or database software manually, you might accidentally cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management"} +{"global_id": 1132, "doc_id": "rds", "chunk_id": "2", "question_id": 1, "question": "What is responsible for hosting the software components and infrastructure of DB instances and DB clusters in Amazon RDS?", "answer_span": "Amazon RDS is responsible for hosting the software components and infrastructure of DB instances and DB clusters.", "chunk": "cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. 
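Performance Insights, mentioned above as a way to identify problematic queries, also exposes an API. The following is a hedged sketch with boto3; the resource identifier is a placeholder, Performance Insights must already be enabled on the instance, and the grouping dimension is an assumption about how you want to slice database load.

```python
import boto3
from datetime import datetime, timedelta, timezone

pi = boto3.client("pi")  # Performance Insights

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# "db-XXXXXXXXXXXX" stands in for the instance's DbiResourceId (readable
# from describe_db_instances()); it is a placeholder here.
response = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-XXXXXXXXXXXX",
    StartTime=start,
    EndTime=end,
    PeriodInSeconds=300,
    MetricQueries=[
        {
            "Metric": "db.load.avg",          # average active sessions
            "GroupBy": {"Group": "db.sql"},   # sliced by SQL statement
        }
    ],
)

for metric in response["MetricList"]:
    print(metric["Key"], metric["DataPoints"][:3])
```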
Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management Application optimization Customer Customer Customer Scaling Customer Customer AWS High availability Customer Customer AWS Database backups Customer Customer AWS Database software patching Customer Customer AWS Database software install Customer Customer AWS Operating system (OS) patching Customer Customer AWS OS installation Customer Customer AWS Server maintenance Customer AWS AWS Hardware lifecycle Customer AWS AWS Power, network, and cooling Customer AWS AWS Amazon RDS shared responsibility model Amazon RDS is responsible for hosting the software components and infrastructure of DB instances and DB clusters. You are responsible for query tuning, which is the process of adjusting SQL queries to improve performance. Query performance is highly dependent on database design, data size, data distribution, application workload, and query patterns, which can vary greatly. Monitoring and tuning are highly individualized processes that you own for your RDS databases. You can use Amazon RDS Performance Insights and other tools to identify problematic queries. Amazon RDS shared responsibility model 3 Amazon Relational Database Service User Guide Amazon RDS DB instances A DB instance is an isolated database environment in the AWS Cloud. The basic building block of Amazon RDS is the DB instance. Your DB instance can contain one or more user-created databases. The following diagram shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the"} +{"global_id": 1133, "doc_id": "rds", "chunk_id": "2", "question_id": 2, "question": "Who is responsible for query tuning in Amazon RDS?", "answer_span": "You are responsible for query tuning, which is the process of adjusting SQL queries to improve performance.", "chunk": "cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management Application optimization Customer Customer Customer Scaling Customer Customer AWS High availability Customer Customer AWS Database backups Customer Customer AWS Database software patching Customer Customer AWS Database software install Customer Customer AWS Operating system (OS) patching Customer Customer AWS OS installation Customer Customer AWS Server maintenance Customer AWS AWS Hardware lifecycle Customer AWS AWS Power, network, and cooling Customer AWS AWS Amazon RDS shared responsibility model Amazon RDS is responsible for hosting the software components and infrastructure of DB instances and DB clusters. You are responsible for query tuning, which is the process of adjusting SQL queries to improve performance. Query performance is highly dependent on database design, data size, data distribution, application workload, and query patterns, which can vary greatly. Monitoring and tuning are highly individualized processes that you own for your RDS databases. You can use Amazon RDS Performance Insights and other tools to identify problematic queries. 
Amazon RDS shared responsibility model 3 Amazon Relational Database Service User Guide Amazon RDS DB instances A DB instance is an isolated database environment in the AWS Cloud. The basic building block of Amazon RDS is the DB instance. Your DB instance can contain one or more user-created databases. The following diagram shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the"} +{"global_id": 1134, "doc_id": "rds", "chunk_id": "2", "question_id": 3, "question": "What is a DB instance in the context of Amazon RDS?", "answer_span": "A DB instance is an isolated database environment in the AWS Cloud.", "chunk": "cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management Application optimization Customer Customer Customer Scaling Customer Customer AWS High availability Customer Customer AWS Database backups Customer Customer AWS Database software patching Customer Customer AWS Database software install Customer Customer AWS Operating system (OS) patching Customer Customer AWS OS installation Customer Customer AWS Server maintenance Customer AWS AWS Hardware lifecycle Customer AWS AWS Power, network, and cooling Customer AWS AWS Amazon RDS shared responsibility model Amazon RDS is responsible for hosting the software components and infrastructure of DB instances and DB clusters. You are responsible for query tuning, which is the process of adjusting SQL queries to improve performance. Query performance is highly dependent on database design, data size, data distribution, application workload, and query patterns, which can vary greatly. Monitoring and tuning are highly individualized processes that you own for your RDS databases. You can use Amazon RDS Performance Insights and other tools to identify problematic queries. Amazon RDS shared responsibility model 3 Amazon Relational Database Service User Guide Amazon RDS DB instances A DB instance is an isolated database environment in the AWS Cloud. The basic building block of Amazon RDS is the DB instance. Your DB instance can contain one or more user-created databases. The following diagram shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the"} +{"global_id": 1135, "doc_id": "rds", "chunk_id": "2", "question_id": 4, "question": "How can you access your DB instances?", "answer_span": "You can access your DB instances by using the same tools and applications that you use with a standalone database instance.", "chunk": "cause application downtime. You might spend hours checking every change to identify and fix an issue. The following table compares the management models for on-premises databases, Amazon EC2, and Amazon RDS. 
Comparison of responsibilities 2 Amazon Relational Database Service User Guide Feature On-premises management Amazon EC2 management Amazon RDS management Application optimization Customer Customer Customer Scaling Customer Customer AWS High availability Customer Customer AWS Database backups Customer Customer AWS Database software patching Customer Customer AWS Database software install Customer Customer AWS Operating system (OS) patching Customer Customer AWS OS installation Customer Customer AWS Server maintenance Customer AWS AWS Hardware lifecycle Customer AWS AWS Power, network, and cooling Customer AWS AWS Amazon RDS shared responsibility model Amazon RDS is responsible for hosting the software components and infrastructure of DB instances and DB clusters. You are responsible for query tuning, which is the process of adjusting SQL queries to improve performance. Query performance is highly dependent on database design, data size, data distribution, application workload, and query patterns, which can vary greatly. Monitoring and tuning are highly individualized processes that you own for your RDS databases. You can use Amazon RDS Performance Insights and other tools to identify problematic queries. Amazon RDS shared responsibility model 3 Amazon Relational Database Service User Guide Amazon RDS DB instances A DB instance is an isolated database environment in the AWS Cloud. The basic building block of Amazon RDS is the DB instance. Your DB instance can contain one or more user-created databases. The following diagram shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the"} +{"global_id": 1136, "doc_id": "rds", "chunk_id": "3", "question_id": 1, "question": "What does a virtual private cloud (VPC) contain?", "answer_span": "shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances.", "chunk": "shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the AWS Command Line Interface (AWS CLI), the Amazon RDS API, or the AWS Management Console. Topics • Amazon RDS application architecture: example • DB engines • DB instance classes • DB instance storage • DB instances in an Amazon Virtual Private Cloud (Amazon VPC) DB instances 4 Amazon Relational Database Service User Guide Best practices for Amazon RDS Learn best practices for working with Amazon RDS. As new best practices are identified, we will keep this section up to date. 
Topics • Amazon RDS basic operational guidelines • DB instance RAM recommendations • Keeping database engine versions up to date • AWS database drivers • Using Enhanced Monitoring to identify operating system issues • Using metrics to identify performance issues • Tuning queries • Best practices for working with MySQL • Best practices for working with MariaDB • Best practices for working with Oracle • Best practices for working with PostgreSQL • Best practices for working with SQL Server • Working with DB parameter groups • Best practices for automating DB instance creation • Amazon RDS new features video Note For common recommendations for Amazon RDS, see Recommendations from Amazon RDS. Amazon RDS basic operational guidelines The following are basic operational guidelines that everyone should follow when working with Amazon RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage."} +{"global_id": 1137, "doc_id": "rds", "chunk_id": "3", "question_id": 2, "question": "How can you access your DB instances?", "answer_span": "You can access your DB instances by using the same tools and applications that you use with a standalone database instance.", "chunk": "shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the AWS Command Line Interface (AWS CLI), the Amazon RDS API, or the AWS Management Console. Topics • Amazon RDS application architecture: example • DB engines • DB instance classes • DB instance storage • DB instances in an Amazon Virtual Private Cloud (Amazon VPC) DB instances 4 Amazon Relational Database Service User Guide Best practices for Amazon RDS Learn best practices for working with Amazon RDS. As new best practices are identified, we will keep this section up to date. Topics • Amazon RDS basic operational guidelines • DB instance RAM recommendations • Keeping database engine versions up to date • AWS database drivers • Using Enhanced Monitoring to identify operating system issues • Using metrics to identify performance issues • Tuning queries • Best practices for working with MySQL • Best practices for working with MariaDB • Best practices for working with Oracle • Best practices for working with PostgreSQL • Best practices for working with SQL Server • Working with DB parameter groups • Best practices for automating DB instance creation • Amazon RDS new features video Note For common recommendations for Amazon RDS, see Recommendations from Amazon RDS. Amazon RDS basic operational guidelines The following are basic operational guidelines that everyone should follow when working with Amazon RDS. 
Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage."} +{"global_id": 1138, "doc_id": "rds", "chunk_id": "3", "question_id": 3, "question": "What are some topics covered in best practices for Amazon RDS?", "answer_span": "Topics • Amazon RDS basic operational guidelines • DB instance RAM recommendations • Keeping database engine versions up to date • AWS database drivers • Using Enhanced Monitoring to identify operating system issues • Using metrics to identify performance issues • Tuning queries • Best practices for working with MySQL • Best practices for working with MariaDB • Best practices for working with Oracle • Best practices for working with PostgreSQL • Best practices for working with SQL Server • Working with DB parameter groups • Best practices for automating DB instance creation • Amazon RDS new features video", "chunk": "shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. You can create and modify a DB instance by using the AWS Command Line Interface (AWS CLI), the Amazon RDS API, or the AWS Management Console. Topics • Amazon RDS application architecture: example • DB engines • DB instance classes • DB instance storage • DB instances in an Amazon Virtual Private Cloud (Amazon VPC) DB instances 4 Amazon Relational Database Service User Guide Best practices for Amazon RDS Learn best practices for working with Amazon RDS. As new best practices are identified, we will keep this section up to date. Topics • Amazon RDS basic operational guidelines • DB instance RAM recommendations • Keeping database engine versions up to date • AWS database drivers • Using Enhanced Monitoring to identify operating system issues • Using metrics to identify performance issues • Tuning queries • Best practices for working with MySQL • Best practices for working with MariaDB • Best practices for working with Oracle • Best practices for working with PostgreSQL • Best practices for working with SQL Server • Working with DB parameter groups • Best practices for automating DB instance creation • Amazon RDS new features video Note For common recommendations for Amazon RDS, see Recommendations from Amazon RDS. Amazon RDS basic operational guidelines The following are basic operational guidelines that everyone should follow when working with Amazon RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage."} +{"global_id": 1139, "doc_id": "rds", "chunk_id": "3", "question_id": 4, "question": "What should everyone follow when working with Amazon RDS?", "answer_span": "The following are basic operational guidelines that everyone should follow when working with Amazon RDS.", "chunk": "shows a virtual private cloud (VPC) that contains two Availability Zones, with each AZ containing two DB instances. You can access your DB instances by using the same tools and applications that you use with a standalone database instance. 
You can create and modify a DB instance by using the AWS Command Line Interface (AWS CLI), the Amazon RDS API, or the AWS Management Console. Topics • Amazon RDS application architecture: example • DB engines • DB instance classes • DB instance storage • DB instances in an Amazon Virtual Private Cloud (Amazon VPC) DB instances 4 Amazon Relational Database Service User Guide Best practices for Amazon RDS Learn best practices for working with Amazon RDS. As new best practices are identified, we will keep this section up to date. Topics • Amazon RDS basic operational guidelines • DB instance RAM recommendations • Keeping database engine versions up to date • AWS database drivers • Using Enhanced Monitoring to identify operating system issues • Using metrics to identify performance issues • Tuning queries • Best practices for working with MySQL • Best practices for working with MariaDB • Best practices for working with Oracle • Best practices for working with PostgreSQL • Best practices for working with SQL Server • Working with DB parameter groups • Best practices for automating DB instance creation • Amazon RDS new features video Note For common recommendations for Amazon RDS, see Recommendations from Amazon RDS. Amazon RDS basic operational guidelines The following are basic operational guidelines that everyone should follow when working with Amazon RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage."} +{"global_id": 1140, "doc_id": "rds", "chunk_id": "4", "question_id": 1, "question": "What should you use to monitor your memory, CPU, replica lag, and storage usage?", "answer_span": "Use metrics to monitor your memory, CPU, replica lag, and storage usage.", "chunk": "operational guidelines that everyone should follow when working with Amazon RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage. You can set up Amazon CloudWatch to notify you when the usage patterns change or when your deployment approaches capacity limits. This allows you to maintain system performance and availability. • Scale up your DB instance when you are approaching storage capacity limits. You should have some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. • Enable automatic backups and set the backup window to occur during the daily low in write IOPS. That's when a backup is least disruptive to your database usage. • If your database workload requires more I/O than you have provisioned, recovery after a failover or database failure will be slow. To increase the I/O capacity of a DB instance, do any or all of the following: • Migrate to a different DB instance class with high I/O capacity. • Convert from magnetic storage to either General Purpose or Provisioned IOPS storage, depending on how much of an increase you need. For information on available storage types, see Amazon RDS storage types. If you convert to Provisioned IOPS storage, make sure you also use a DB instance class that is optimized for Provisioned IOPS. For information on Provisioned IOPS, see Provisioned IOPS SSD storage. 
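To increase I/O capacity along the lines described above, the instance class change and the move to Provisioned IOPS storage can be requested in a single modify call. This boto3 sketch uses placeholder identifiers and example sizes; choose the class, storage type, allocated storage, and IOPS value for your own workload.

```python
import boto3

rds = boto3.client("rds")

# Move to a larger instance class and Provisioned IOPS (io1) storage in a
# single modification request. Values here are examples, not recommendations.
rds.modify_db_instance(
    DBInstanceIdentifier="mydbinstance",
    DBInstanceClass="db.r6g.xlarge",
    StorageType="io1",
    AllocatedStorage=400,
    Iops=12000,
    ApplyImmediately=False,  # apply during the next maintenance window
)
```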
• If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of"} +{"global_id": 1141, "doc_id": "rds", "chunk_id": "4", "question_id": 2, "question": "When should you scale up your DB instance?", "answer_span": "Scale up your DB instance when you are approaching storage capacity limits.", "chunk": "operational guidelines that everyone should follow when working with Amazon RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage. You can set up Amazon CloudWatch to notify you when the usage patterns change or when your deployment approaches capacity limits. This allows you to maintain system performance and availability. • Scale up your DB instance when you are approaching storage capacity limits. You should have some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. • Enable automatic backups and set the backup window to occur during the daily low in write IOPS. That's when a backup is least disruptive to your database usage. • If your database workload requires more I/O than you have provisioned, recovery after a failover or database failure will be slow. To increase the I/O capacity of a DB instance, do any or all of the following: • Migrate to a different DB instance class with high I/O capacity. • Convert from magnetic storage to either General Purpose or Provisioned IOPS storage, depending on how much of an increase you need. For information on available storage types, see Amazon RDS storage types. If you convert to Provisioned IOPS storage, make sure you also use a DB instance class that is optimized for Provisioned IOPS. For information on Provisioned IOPS, see Provisioned IOPS SSD storage. • If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of"} +{"global_id": 1142, "doc_id": "rds", "chunk_id": "4", "question_id": 3, "question": "What should you enable to minimize disruption during backups?", "answer_span": "Enable automatic backups and set the backup window to occur during the daily low in write IOPS.", "chunk": "operational guidelines that everyone should follow when working with Amazon RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage. You can set up Amazon CloudWatch to notify you when the usage patterns change or when your deployment approaches capacity limits. This allows you to maintain system performance and availability. • Scale up your DB instance when you are approaching storage capacity limits. You should have some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. • Enable automatic backups and set the backup window to occur during the daily low in write IOPS. That's when a backup is least disruptive to your database usage. 
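For the guideline above about monitoring metrics and having Amazon CloudWatch notify you as usage approaches capacity limits, a basic alarm can be created programmatically. The sketch below alarms on low free storage for a single instance; the instance identifier, SNS topic ARN, and threshold are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Notify an SNS topic when free storage for one DB instance drops below
# roughly 10 GiB. All identifiers and the threshold are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="mydbinstance-low-free-storage",
    Namespace="AWS/RDS",
    MetricName="FreeStorageSpace",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydbinstance"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=10 * 1024**3,          # FreeStorageSpace is reported in bytes
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:111122223333:rds-alerts"],
)
```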
• If your database workload requires more I/O than you have provisioned, recovery after a failover or database failure will be slow. To increase the I/O capacity of a DB instance, do any or all of the following: • Migrate to a different DB instance class with high I/O capacity. • Convert from magnetic storage to either General Purpose or Provisioned IOPS storage, depending on how much of an increase you need. For information on available storage types, see Amazon RDS storage types. If you convert to Provisioned IOPS storage, make sure you also use a DB instance class that is optimized for Provisioned IOPS. For information on Provisioned IOPS, see Provisioned IOPS SSD storage. • If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of"} +{"global_id": 1143, "doc_id": "rds", "chunk_id": "4", "question_id": 4, "question": "What should you do if your database workload requires more I/O than you have provisioned?", "answer_span": "If your database workload requires more I/O than you have provisioned, recovery after a failover or database failure will be slow.", "chunk": "operational guidelines that everyone should follow when working with Amazon RDS. Note that the Amazon RDS Service Level Agreement requires that you follow these guidelines: Amazon RDS basic operational guidelines 484 Amazon Relational Database Service User Guide • Use metrics to monitor your memory, CPU, replica lag, and storage usage. You can set up Amazon CloudWatch to notify you when the usage patterns change or when your deployment approaches capacity limits. This allows you to maintain system performance and availability. • Scale up your DB instance when you are approaching storage capacity limits. You should have some buffer in storage and memory to accommodate unforeseen increases in demand from your applications. • Enable automatic backups and set the backup window to occur during the daily low in write IOPS. That's when a backup is least disruptive to your database usage. • If your database workload requires more I/O than you have provisioned, recovery after a failover or database failure will be slow. To increase the I/O capacity of a DB instance, do any or all of the following: • Migrate to a different DB instance class with high I/O capacity. • Convert from magnetic storage to either General Purpose or Provisioned IOPS storage, depending on how much of an increase you need. For information on available storage types, see Amazon RDS storage types. If you convert to Provisioned IOPS storage, make sure you also use a DB instance class that is optimized for Provisioned IOPS. For information on Provisioned IOPS, see Provisioned IOPS SSD storage. • If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of"} +{"global_id": 1144, "doc_id": "rds", "chunk_id": "5", "question_id": 1, "question": "What should you set the time-to-live (TTL) value to if your client application is caching the Domain Name Service (DNS) data?", "answer_span": "set a time-to-live (TTL) value of less than 30 seconds.", "chunk": "see Provisioned IOPS SSD storage. 
• If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of a DB instance can change after a failover. Caching the DNS data for an extended time can thus lead to connection failures. Your application might try to connect to an IP address that's no longer in service. • Test failover for your DB instance to understand how long the process takes for your particular use case. Also test failover to ensure that the application that accesses your DB instance can automatically connect to the new DB instance after failover occurs. DB instance RAM recommendations An Amazon RDS performance best practice is to allocate enough RAM so that your working set resides almost completely in memory. The working set is the data and indexes that are frequently in use on your instance. The more you use the DB instance, the more the working set will grow. DB instance RAM recommendations 485 Amazon Relational Database Service User Guide To tell if your working set is almost all in memory, check the ReadIOPS metric (using Amazon CloudWatch) while the DB instance is under load. The value of ReadIOPS should be small and stable. In some cases, scaling up the DB instance class to a class with more RAM results in a dramatic drop in ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the"} +{"global_id": 1145, "doc_id": "rds", "chunk_id": "5", "question_id": 2, "question": "What is a best practice regarding RAM allocation for an Amazon RDS instance?", "answer_span": "An Amazon RDS performance best practice is to allocate enough RAM so that your working set resides almost completely in memory.", "chunk": "see Provisioned IOPS SSD storage. • If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of a DB instance can change after a failover. Caching the DNS data for an extended time can thus lead to connection failures. Your application might try to connect to an IP address that's no longer in service. • Test failover for your DB instance to understand how long the process takes for your particular use case. Also test failover to ensure that the application that accesses your DB instance can automatically connect to the new DB instance after failover occurs. DB instance RAM recommendations An Amazon RDS performance best practice is to allocate enough RAM so that your working set resides almost completely in memory. The working set is the data and indexes that are frequently in use on your instance. The more you use the DB instance, the more the working set will grow. DB instance RAM recommendations 485 Amazon Relational Database Service User Guide To tell if your working set is almost all in memory, check the ReadIOPS metric (using Amazon CloudWatch) while the DB instance is under load. The value of ReadIOPS should be small and stable. In some cases, scaling up the DB instance class to a class with more RAM results in a dramatic drop in ReadIOPS. 
In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the"} +{"global_id": 1146, "doc_id": "rds", "chunk_id": "5", "question_id": 3, "question": "How can you tell if your working set is almost all in memory?", "answer_span": "check the ReadIOPS metric (using Amazon CloudWatch) while the DB instance is under load.", "chunk": "see Provisioned IOPS SSD storage. • If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of a DB instance can change after a failover. Caching the DNS data for an extended time can thus lead to connection failures. Your application might try to connect to an IP address that's no longer in service. • Test failover for your DB instance to understand how long the process takes for your particular use case. Also test failover to ensure that the application that accesses your DB instance can automatically connect to the new DB instance after failover occurs. DB instance RAM recommendations An Amazon RDS performance best practice is to allocate enough RAM so that your working set resides almost completely in memory. The working set is the data and indexes that are frequently in use on your instance. The more you use the DB instance, the more the working set will grow. DB instance RAM recommendations 485 Amazon Relational Database Service User Guide To tell if your working set is almost all in memory, check the ReadIOPS metric (using Amazon CloudWatch) while the DB instance is under load. The value of ReadIOPS should be small and stable. In some cases, scaling up the DB instance class to a class with more RAM results in a dramatic drop in ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the"} +{"global_id": 1147, "doc_id": "rds", "chunk_id": "5", "question_id": 4, "question": "What should you do if ReadIOPS is reduced to a very small amount after scaling up the DB instance class?", "answer_span": "Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation.", "chunk": "see Provisioned IOPS SSD storage. • If you are already using Provisioned IOPS storage, provision additional throughput capacity. • If your client application is caching the Domain Name Service (DNS) data of your DB instances, set a time-to-live (TTL) value of less than 30 seconds. The underlying IP address of a DB instance can change after a failover. Caching the DNS data for an extended time can thus lead to connection failures. Your application might try to connect to an IP address that's no longer in service. • Test failover for your DB instance to understand how long the process takes for your particular use case. Also test failover to ensure that the application that accesses your DB instance can automatically connect to the new DB instance after failover occurs. 
DB instance RAM recommendations An Amazon RDS performance best practice is to allocate enough RAM so that your working set resides almost completely in memory. The working set is the data and indexes that are frequently in use on your instance. The more you use the DB instance, the more the working set will grow. DB instance RAM recommendations 485 Amazon Relational Database Service User Guide To tell if your working set is almost all in memory, check the ReadIOPS metric (using Amazon CloudWatch) while the DB instance is under load. The value of ReadIOPS should be small and stable. In some cases, scaling up the DB instance class to a class with more RAM results in a dramatic drop in ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the"} +{"global_id": 1148, "doc_id": "rds", "chunk_id": "6", "question_id": 1, "question": "What should you do until ReadIOPS no longer drops dramatically after a scaling operation?", "answer_span": "Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation", "chunk": "in ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the Amazon RDS console. Keeping database engine versions up to date Regularly upgrade your database engine version to maintain security, performance, and compliance. Amazon RDS releases new minor and major versions that include security patches, performance enhancements, and new features. Running an outdated database engine can expose your workloads to known vulnerabilities, compatibility issues, and reduced support from AWS and database vendors. To minimize disruption, consider the following when you plan upgrades: • Test in a staging environment – Validate the new version against your workload before you upgrade production databases. • Use Amazon RDS managed upgrades – Enable automatic minor version upgrades for easier patching. • Schedule major version upgrades – Review release notes, test application compatibility, and plan a controlled upgrade window. Regular upgrades help ensure your database remains secure, optimized, and aligned with AWS best practices. AWS database drivers We recommend the AWS suite of drivers for application connectivity. The drivers have been designed to provide support for faster switchover and failover times, and authentication with AWS Secrets Manager, AWS Identity and Access Management (IAM), and Federated Identity. The AWS drivers rely on monitoring DB instance status and being aware of the instance topology to determine the new writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. 
Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have"} +{"global_id": 1149, "doc_id": "rds", "chunk_id": "6", "question_id": 2, "question": "What is recommended to maintain security, performance, and compliance?", "answer_span": "Regularly upgrade your database engine version to maintain security, performance, and compliance", "chunk": "in ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the Amazon RDS console. Keeping database engine versions up to date Regularly upgrade your database engine version to maintain security, performance, and compliance. Amazon RDS releases new minor and major versions that include security patches, performance enhancements, and new features. Running an outdated database engine can expose your workloads to known vulnerabilities, compatibility issues, and reduced support from AWS and database vendors. To minimize disruption, consider the following when you plan upgrades: • Test in a staging environment – Validate the new version against your workload before you upgrade production databases. • Use Amazon RDS managed upgrades – Enable automatic minor version upgrades for easier patching. • Schedule major version upgrades – Review release notes, test application compatibility, and plan a controlled upgrade window. Regular upgrades help ensure your database remains secure, optimized, and aligned with AWS best practices. AWS database drivers We recommend the AWS suite of drivers for application connectivity. The drivers have been designed to provide support for faster switchover and failover times, and authentication with AWS Secrets Manager, AWS Identity and Access Management (IAM), and Federated Identity. The AWS drivers rely on monitoring DB instance status and being aware of the instance topology to determine the new writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have"} +{"global_id": 1150, "doc_id": "rds", "chunk_id": "6", "question_id": 3, "question": "What should you validate before upgrading production databases?", "answer_span": "Validate the new version against your workload before you upgrade production databases", "chunk": "in ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the Amazon RDS console. Keeping database engine versions up to date Regularly upgrade your database engine version to maintain security, performance, and compliance. Amazon RDS releases new minor and major versions that include security patches, performance enhancements, and new features. Running an outdated database engine can expose your workloads to known vulnerabilities, compatibility issues, and reduced support from AWS and database vendors. 
To minimize disruption, consider the following when you plan upgrades: • Test in a staging environment – Validate the new version against your workload before you upgrade production databases. • Use Amazon RDS managed upgrades – Enable automatic minor version upgrades for easier patching. • Schedule major version upgrades – Review release notes, test application compatibility, and plan a controlled upgrade window. Regular upgrades help ensure your database remains secure, optimized, and aligned with AWS best practices. AWS database drivers We recommend the AWS suite of drivers for application connectivity. The drivers have been designed to provide support for faster switchover and failover times, and authentication with AWS Secrets Manager, AWS Identity and Access Management (IAM), and Federated Identity. The AWS drivers rely on monitoring DB instance status and being aware of the instance topology to determine the new writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have"} +{"global_id": 1151, "doc_id": "rds", "chunk_id": "6", "question_id": 4, "question": "What do the AWS drivers rely on to determine the new writer?", "answer_span": "The AWS drivers rely on monitoring DB instance status and being aware of the instance topology to determine the new writer", "chunk": "in ReadIOPS. In these cases, your working set was not almost completely in memory. Continue to scale up until ReadIOPS no longer drops dramatically after a scaling operation, or ReadIOPS is reduced to a very small amount. For information on monitoring a DB instance's metrics, see Viewing metrics in the Amazon RDS console. Keeping database engine versions up to date Regularly upgrade your database engine version to maintain security, performance, and compliance. Amazon RDS releases new minor and major versions that include security patches, performance enhancements, and new features. Running an outdated database engine can expose your workloads to known vulnerabilities, compatibility issues, and reduced support from AWS and database vendors. To minimize disruption, consider the following when you plan upgrades: • Test in a staging environment – Validate the new version against your workload before you upgrade production databases. • Use Amazon RDS managed upgrades – Enable automatic minor version upgrades for easier patching. • Schedule major version upgrades – Review release notes, test application compatibility, and plan a controlled upgrade window. Regular upgrades help ensure your database remains secure, optimized, and aligned with AWS best practices. AWS database drivers We recommend the AWS suite of drivers for application connectivity. The drivers have been designed to provide support for faster switchover and failover times, and authentication with AWS Secrets Manager, AWS Identity and Access Management (IAM), and Federated Identity. The AWS drivers rely on monitoring DB instance status and being aware of the instance topology to determine the new writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. 
Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have"} +{"global_id": 1152, "doc_id": "rds", "chunk_id": "7", "question_id": 1, "question": "What does the approach reduce switchover and failover times to?", "answer_span": "This approach reduces switchover and failover times to single-digit seconds", "chunk": "writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have built-in support for these service features. For more information, see Connecting to DB instances with the AWS drivers. Using Enhanced Monitoring to identify operating system issues When Enhanced Monitoring is enabled, Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console. You can also consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. For more information about Enhanced Monitoring, see Monitoring OS metrics with Enhanced Monitoring. Using metrics to identify performance issues To identify performance issues caused by insufficient resources and other common bottlenecks, you can monitor the metrics available for your Amazon RDS DB instance. Viewing performance metrics You should monitor performance metrics on a regular basis to see the average, maximum, and minimum values for a variety of time ranges. If you do so, you can identify when performance is degraded. You can also set Amazon CloudWatch alarms for particular metric thresholds so you are alerted if they are reached. To troubleshoot performance issues, it's important to understand the baseline performance of the system. When you set up a DB instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to"} +{"global_id": 1153, "doc_id": "rds", "chunk_id": "7", "question_id": 2, "question": "What does Amazon RDS provide when Enhanced Monitoring is enabled?", "answer_span": "Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on", "chunk": "writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have built-in support for these service features. For more information, see Connecting to DB instances with the AWS drivers. Using Enhanced Monitoring to identify operating system issues When Enhanced Monitoring is enabled, Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console. You can also consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. For more information about Enhanced Monitoring, see Monitoring OS metrics with Enhanced Monitoring. 
Using metrics to identify performance issues To identify performance issues caused by insufficient resources and other common bottlenecks, you can monitor the metrics available for your Amazon RDS DB instance. Viewing performance metrics You should monitor performance metrics on a regular basis to see the average, maximum, and minimum values for a variety of time ranges. If you do so, you can identify when performance is degraded. You can also set Amazon CloudWatch alarms for particular metric thresholds so you are alerted if they are reached. To troubleshoot performance issues, it's important to understand the baseline performance of the system. When you set up a DB instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to"} +{"global_id": 1154, "doc_id": "rds", "chunk_id": "7", "question_id": 3, "question": "What should you monitor to identify performance issues?", "answer_span": "you can monitor the metrics available for your Amazon RDS DB instance", "chunk": "writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have built-in support for these service features. For more information, see Connecting to DB instances with the AWS drivers. Using Enhanced Monitoring to identify operating system issues When Enhanced Monitoring is enabled, Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console. You can also consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. For more information about Enhanced Monitoring, see Monitoring OS metrics with Enhanced Monitoring. Using metrics to identify performance issues To identify performance issues caused by insufficient resources and other common bottlenecks, you can monitor the metrics available for your Amazon RDS DB instance. Viewing performance metrics You should monitor performance metrics on a regular basis to see the average, maximum, and minimum values for a variety of time ranges. If you do so, you can identify when performance is degraded. You can also set Amazon CloudWatch alarms for particular metric thresholds so you are alerted if they are reached. To troubleshoot performance issues, it's important to understand the baseline performance of the system. When you set up a DB instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to"} +{"global_id": 1155, "doc_id": "rds", "chunk_id": "7", "question_id": 4, "question": "What is important to understand to troubleshoot performance issues?", "answer_span": "it's important to understand the baseline performance of the system", "chunk": "writer. This approach reduces switchover and failover times to single-digit seconds, compared to tens of seconds for open-source drivers. 
Keeping database engine versions up to date 486 Amazon Relational Database Service User Guide As new service features are introduced, the goal of the AWS suite of drivers is to have built-in support for these service features. For more information, see Connecting to DB instances with the AWS drivers. Using Enhanced Monitoring to identify operating system issues When Enhanced Monitoring is enabled, Amazon RDS provides metrics in real time for the operating system (OS) that your DB instance runs on. You can view the metrics for your DB instance using the console. You can also consume the Enhanced Monitoring JSON output from Amazon CloudWatch Logs in a monitoring system of your choice. For more information about Enhanced Monitoring, see Monitoring OS metrics with Enhanced Monitoring. Using metrics to identify performance issues To identify performance issues caused by insufficient resources and other common bottlenecks, you can monitor the metrics available for your Amazon RDS DB instance. Viewing performance metrics You should monitor performance metrics on a regular basis to see the average, maximum, and minimum values for a variety of time ranges. If you do so, you can identify when performance is degraded. You can also set Amazon CloudWatch alarms for particular metric thresholds so you are alerted if they are reached. To troubleshoot performance issues, it's important to understand the baseline performance of the system. When you set up a DB instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to"} +{"global_id": 1156, "doc_id": "rds", "chunk_id": "8", "question_id": 1, "question": "What should you capture to understand performance metrics?", "answer_span": "capture the average, maximum, and minimum values of all performance metrics.", "chunk": "instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to get comparisons for both peak and off-peak hours of operation. You can then use this information to identify when performance is dropping below standard levels. If you use Multi-AZ DB clusters, monitor the time difference between the latest transaction on the writer DB instance and the latest applied transaction on a reader DB instance. This difference is called replica lag. For more information, see Replica lag and Multi-AZ DB clusters. You can view the combined Performance Insights and CloudWatch metrics in the Performance Insights dashboard and monitor your DB instance. To use this monitoring view, Performance Using Enhanced Monitoring 487"} +{"global_id": 1157, "doc_id": "rds", "chunk_id": "8", "question_id": 2, "question": "What intervals are suggested for monitoring performance?", "answer_span": "at a number of different intervals (for example, one hour, 24 hours, one week, two weeks).", "chunk": "instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to get comparisons for both peak and off-peak hours of operation. 
You can then use this information to identify when performance is dropping below standard levels. If you use Multi-AZ DB clusters, monitor the time difference between the latest transaction on the writer DB instance and the latest applied transaction on a reader DB instance. This difference is called replica lag. For more information, see Replica lag and Multi-AZ DB clusters. You can view the combined Performance Insights and CloudWatch metrics in the Performance Insights dashboard and monitor your DB instance. To use this monitoring view, Performance Using Enhanced Monitoring 487"} +{"global_id": 1158, "doc_id": "rds", "chunk_id": "8", "question_id": 3, "question": "What is the term for the time difference between the latest transaction on the writer DB instance and the latest applied transaction on a reader DB instance?", "answer_span": "This difference is called replica lag.", "chunk": "instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to get comparisons for both peak and off-peak hours of operation. You can then use this information to identify when performance is dropping below standard levels. If you use Multi-AZ DB clusters, monitor the time difference between the latest transaction on the writer DB instance and the latest applied transaction on a reader DB instance. This difference is called replica lag. For more information, see Replica lag and Multi-AZ DB clusters. You can view the combined Performance Insights and CloudWatch metrics in the Performance Insights dashboard and monitor your DB instance. To use this monitoring view, Performance Using Enhanced Monitoring 487"} +{"global_id": 1159, "doc_id": "rds", "chunk_id": "8", "question_id": 4, "question": "Where can you view the combined Performance Insights and CloudWatch metrics?", "answer_span": "in the Performance Insights dashboard and monitor your DB instance.", "chunk": "instance and run it with a typical workload, capture the average, maximum, and minimum values of all performance metrics. Do so at a number of different intervals (for example, one hour, 24 hours, one week, two weeks). This can give you an idea of what is normal. It helps to get comparisons for both peak and off-peak hours of operation. You can then use this information to identify when performance is dropping below standard levels. If you use Multi-AZ DB clusters, monitor the time difference between the latest transaction on the writer DB instance and the latest applied transaction on a reader DB instance. This difference is called replica lag. For more information, see Replica lag and Multi-AZ DB clusters. You can view the combined Performance Insights and CloudWatch metrics in the Performance Insights dashboard and monitor your DB instance. To use this monitoring view, Performance Using Enhanced Monitoring 487"} +{"global_id": 1160, "doc_id": "vpc", "chunk_id": "0", "question_id": 1, "question": "What is Amazon VPC?", "answer_span": "With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined.", "chunk": "Amazon Virtual Private Cloud User Guide What is Amazon VPC? With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. 
This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following diagram shows an example VPC. The VPC has one subnet in each of the Availability Zones in the Region, EC2 instances in each subnet, and an internet gateway to allow communication between the resources in your VPC and the internet. For more information, see Amazon Virtual Private Cloud (Amazon VPC). Features The following features help you configure a VPC to provide the connectivity that your applications need: Virtual private clouds (VPC) A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own data center. After you create a VPC, you can add subnets. Features 1 Amazon Virtual Private Cloud User Guide Subnets A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability Zone. After you add subnets, you can deploy AWS resources in your VPC. IP addressing You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also bring your public IPv4 addresses and IPv6 GUA addresses to AWS and allocate them to resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers. Routing Use route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the"} +{"global_id": 1161, "doc_id": "vpc", "chunk_id": "0", "question_id": 2, "question": "What is a subnet?", "answer_span": "A subnet is a range of IP addresses in your VPC.", "chunk": "Amazon Virtual Private Cloud User Guide What is Amazon VPC? With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following diagram shows an example VPC. The VPC has one subnet in each of the Availability Zones in the Region, EC2 instances in each subnet, and an internet gateway to allow communication between the resources in your VPC and the internet. For more information, see Amazon Virtual Private Cloud (Amazon VPC). Features The following features help you configure a VPC to provide the connectivity that your applications need: Virtual private clouds (VPC) A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own data center. After you create a VPC, you can add subnets. Features 1 Amazon Virtual Private Cloud User Guide Subnets A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability Zone. After you add subnets, you can deploy AWS resources in your VPC. IP addressing You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also bring your public IPv4 addresses and IPv6 GUA addresses to AWS and allocate them to resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers. Routing Use route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. 
Use a VPC endpoint to connect to AWS services privately, without the"} +{"global_id": 1162, "doc_id": "vpc", "chunk_id": "0", "question_id": 3, "question": "What can you assign to your VPCs and subnets?", "answer_span": "You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets.", "chunk": "Amazon Virtual Private Cloud User Guide What is Amazon VPC? With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following diagram shows an example VPC. The VPC has one subnet in each of the Availability Zones in the Region, EC2 instances in each subnet, and an internet gateway to allow communication between the resources in your VPC and the internet. For more information, see Amazon Virtual Private Cloud (Amazon VPC). Features The following features help you configure a VPC to provide the connectivity that your applications need: Virtual private clouds (VPC) A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own data center. After you create a VPC, you can add subnets. Features 1 Amazon Virtual Private Cloud User Guide Subnets A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability Zone. After you add subnets, you can deploy AWS resources in your VPC. IP addressing You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also bring your public IPv4 addresses and IPv6 GUA addresses to AWS and allocate them to resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers. Routing Use route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the"} +{"global_id": 1163, "doc_id": "vpc", "chunk_id": "0", "question_id": 4, "question": "What does a gateway do in a VPC?", "answer_span": "A gateway connects your VPC to another network.", "chunk": "Amazon Virtual Private Cloud User Guide What is Amazon VPC? With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following diagram shows an example VPC. The VPC has one subnet in each of the Availability Zones in the Region, EC2 instances in each subnet, and an internet gateway to allow communication between the resources in your VPC and the internet. For more information, see Amazon Virtual Private Cloud (Amazon VPC). Features The following features help you configure a VPC to provide the connectivity that your applications need: Virtual private clouds (VPC) A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own data center. After you create a VPC, you can add subnets. Features 1 Amazon Virtual Private Cloud User Guide Subnets A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability Zone. After you add subnets, you can deploy AWS resources in your VPC. 
IP addressing You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also bring your public IPv4 addresses and IPv6 GUA addresses to AWS and allocate them to resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers. Routing Use route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the"} +{"global_id": 1164, "doc_id": "vpc", "chunk_id": "1", "question_id": 1, "question": "What connects your VPC to another network?", "answer_span": "A gateway connects your VPC to another network.", "chunk": "route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device. Peering connections Use a VPC peering connection to route traffic between the resources in two VPCs. Traffic Mirroring Copy network traffic from network interfaces and send it to security and monitoring appliances for deep packet inspection. Transit gateways Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections. VPC Flow Logs A flow log captures information about the IP traffic going to and from network interfaces in your VPC. VPN connections Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN). Features 2 Amazon Virtual Private Cloud User Guide Getting started with Amazon VPC Your AWS account includes a default VPC in each AWS Region. Your default VPCs are configured such that you can immediately start launching and connecting to EC2 instances. For more information, see Plan your VPC. You can choose to create additional VPCs with the subnets, IP addresses, gateways and routing that you need. For more information, see the section called “Create a VPC”. Working with Amazon VPC You can create and manage your VPCs using any of the following interfaces: • AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command"} +{"global_id": 1165, "doc_id": "vpc", "chunk_id": "1", "question_id": 2, "question": "What is the purpose of a VPC endpoint?", "answer_span": "Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device.", "chunk": "route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device. Peering connections Use a VPC peering connection to route traffic between the resources in two VPCs. Traffic Mirroring Copy network traffic from network interfaces and send it to security and monitoring appliances for deep packet inspection. 
Transit gateways Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections. VPC Flow Logs A flow log captures information about the IP traffic going to and from network interfaces in your VPC. VPN connections Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN). Features 2 Amazon Virtual Private Cloud User Guide Getting started with Amazon VPC Your AWS account includes a default VPC in each AWS Region. Your default VPCs are configured such that you can immediately start launching and connecting to EC2 instances. For more information, see Plan your VPC. You can choose to create additional VPCs with the subnets, IP addresses, gateways and routing that you need. For more information, see the section called “Create a VPC”. Working with Amazon VPC You can create and manage your VPCs using any of the following interfaces: • AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command"} +{"global_id": 1166, "doc_id": "vpc", "chunk_id": "1", "question_id": 3, "question": "What does a flow log capture?", "answer_span": "A flow log captures information about the IP traffic going to and from network interfaces in your VPC.", "chunk": "route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device. Peering connections Use a VPC peering connection to route traffic between the resources in two VPCs. Traffic Mirroring Copy network traffic from network interfaces and send it to security and monitoring appliances for deep packet inspection. Transit gateways Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections. VPC Flow Logs A flow log captures information about the IP traffic going to and from network interfaces in your VPC. VPN connections Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN). Features 2 Amazon Virtual Private Cloud User Guide Getting started with Amazon VPC Your AWS account includes a default VPC in each AWS Region. Your default VPCs are configured such that you can immediately start launching and connecting to EC2 instances. For more information, see Plan your VPC. You can choose to create additional VPCs with the subnets, IP addresses, gateways and routing that you need. For more information, see the section called “Create a VPC”. Working with Amazon VPC You can create and manage your VPCs using any of the following interfaces: • AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. 
For more information, see AWS Command"} +{"global_id": 1167, "doc_id": "vpc", "chunk_id": "1", "question_id": 4, "question": "What interfaces can you use to manage your VPCs?", "answer_span": "You can create and manage your VPCs using any of the following interfaces: • AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux.", "chunk": "route tables to determine where network traffic from your subnet or gateway is directed. Gateways and endpoints A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device. Peering connections Use a VPC peering connection to route traffic between the resources in two VPCs. Traffic Mirroring Copy network traffic from network interfaces and send it to security and monitoring appliances for deep packet inspection. Transit gateways Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections. VPC Flow Logs A flow log captures information about the IP traffic going to and from network interfaces in your VPC. VPN connections Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN). Features 2 Amazon Virtual Private Cloud User Guide Getting started with Amazon VPC Your AWS account includes a default VPC in each AWS Region. Your default VPCs are configured such that you can immediately start launching and connecting to EC2 instances. For more information, see Plan your VPC. You can choose to create additional VPCs with the subnets, IP addresses, gateways and routing that you need. For more information, see the section called “Create a VPC”. Working with Amazon VPC You can create and manage your VPCs using any of the following interfaces: • AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command"} +{"global_id": 1168, "doc_id": "vpc", "chunk_id": "2", "question_id": 1, "question": "What provides a web interface to access your VPCs?", "answer_span": "AWS Management Console — Provides a web interface that you can use to access your VPCs.", "chunk": "AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. • Query API — Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Amazon VPC, but it requires that your application handle low-level details such as generating the hash to sign the request, and error handling. For more information, see Amazon VPC actions in the Amazon EC2 API Reference. Pricing for Amazon VPC There's no additional charge for using a VPC. 
There are, however, charges for some VPC components, such as NAT gateways, IP Address Manager, traffic mirroring, Reachability Analyzer, and Network Access Analyzer. For more information, see Amazon VPC Pricing. Nearly all resources that you launch in your virtual private cloud (VPC) provide you with an IP address for connectivity. The vast majority of resources in your VPC use private IPv4 addresses. Resources that require direct access to the internet over IPv4, however, use public IPv4 addresses. Amazon VPC enables you to launch managed services, such as Elastic Load Balancing, Amazon RDS, and Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. Any public IPv4 addresses provisioned to your account by the managed service will be"} +{"global_id": 1169, "doc_id": "vpc", "chunk_id": "2", "question_id": 2, "question": "What is the AWS Command Line Interface supported on?", "answer_span": "AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux.", "chunk": "AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-speci���c APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. • Query API — Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Amazon VPC, but it requires that your application handle low-level details such as generating the hash to sign the request, and error handling. For more information, see Amazon VPC actions in the Amazon EC2 API Reference. Pricing for Amazon VPC There's no additional charge for using a VPC. There are, however, charges for some VPC components, such as NAT gateways, IP Address Manager, traffic mirroring, Reachability Analyzer, and Network Access Analyzer. For more information, see Amazon VPC Pricing. Nearly all resources that you launch in your virtual private cloud (VPC) provide you with an IP address for connectivity. The vast majority of resources in your VPC use private IPv4 addresses. Resources that require direct access to the internet over IPv4, however, use public IPv4 addresses. Amazon VPC enables you to launch managed services, such as Elastic Load Balancing, Amazon RDS, and Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. Any public IPv4 addresses provisioned to your account by the managed service will be"} +{"global_id": 1170, "doc_id": "vpc", "chunk_id": "2", "question_id": 3, "question": "Are there any charges for using a VPC?", "answer_span": "There's no additional charge for using a VPC.", "chunk": "AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. 
• AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. • Query API — Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Amazon VPC, but it requires that your application handle low-level details such as generating the hash to sign the request, and error handling. For more information, see Amazon VPC actions in the Amazon EC2 API Reference. Pricing for Amazon VPC There's no additional charge for using a VPC. There are, however, charges for some VPC components, such as NAT gateways, IP Address Manager, traffic mirroring, Reachability Analyzer, and Network Access Analyzer. For more information, see Amazon VPC Pricing. Nearly all resources that you launch in your virtual private cloud (VPC) provide you with an IP address for connectivity. The vast majority of resources in your VPC use private IPv4 addresses. Resources that require direct access to the internet over IPv4, however, use public IPv4 addresses. Amazon VPC enables you to launch managed services, such as Elastic Load Balancing, Amazon RDS, and Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. Any public IPv4 addresses provisioned to your account by the managed service will be"} +{"global_id": 1171, "doc_id": "vpc", "chunk_id": "2", "question_id": 4, "question": "What type of addresses do the vast majority of resources in your VPC use?", "answer_span": "The vast majority of resources in your VPC use private IPv4 addresses.", "chunk": "AWS Management Console — Provides a web interface that you can use to access your VPCs. • AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services, including Amazon VPC, and is supported on Windows, Mac, and Linux. For more information, see AWS Command Line Interface. • AWS SDKs — Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and error handling. For more information, see AWS SDKs. • Query API — Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Amazon VPC, but it requires that your application handle low-level details such as generating the hash to sign the request, and error handling. For more information, see Amazon VPC actions in the Amazon EC2 API Reference. Pricing for Amazon VPC There's no additional charge for using a VPC. There are, however, charges for some VPC components, such as NAT gateways, IP Address Manager, traffic mirroring, Reachability Analyzer, and Network Access Analyzer. For more information, see Amazon VPC Pricing. Nearly all resources that you launch in your virtual private cloud (VPC) provide you with an IP address for connectivity. The vast majority of resources in your VPC use private IPv4 addresses. Resources that require direct access to the internet over IPv4, however, use public IPv4 addresses. Amazon VPC enables you to launch managed services, such as Elastic Load Balancing, Amazon RDS, and Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. 
Any public IPv4 addresses provisioned to your account by the managed service will be"} +{"global_id": 1172, "doc_id": "vpc", "chunk_id": "3", "question_id": 1, "question": "What is necessary for a resource to be directly reachable from the internet over IPv4?", "answer_span": "A public IPv4 address is necessary for a resource to be directly reachable from the internet over IPv4.", "chunk": "Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. Any public IPv4 addresses provisioned to your account by the managed service will be charged. These charges will be associated with Amazon VPC service in your AWS Cost and Usage Report. Pricing for public IPv4 addresses A public IPv4 address is an IPv4 address that is routable from the internet. A public IPv4 address is necessary for a resource to be directly reachable from the internet over IPv4. If you are an existing or new AWS Free Tier customer, you get 750 hours of public IPv4 address usage with the EC2 service at no charge. If you are not using the EC2 service in the AWS Free Tier, Public IPv4 addresses are charged. For specific pricing information, see the Public IPv4 address tab in Amazon VPC Pricing. Private IPv4 addresses (RFC 1918) are not charged. For more information about how public IPv4 addresses are charged for shared VPCs, see Billing and metering for the owner and participants. Public IPv4 addresses have the following types: • Elastic IP addresses (EIPs): Static, public IPv4 addresses provided by Amazon that you can associate with an EC2 instance, elastic network interface, or AWS resource. • EC2 public IPv4 addresses: Public IPv4 addresses assigned to an EC2 instance by Amazon (if the EC2 instance is launched into a default subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses"} +{"global_id": 1173, "doc_id": "vpc", "chunk_id": "3", "question_id": 2, "question": "How many hours of public IPv4 address usage do AWS Free Tier customers get with the EC2 service at no charge?", "answer_span": "you get 750 hours of public IPv4 address usage with the EC2 service at no charge.", "chunk": "Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. Any public IPv4 addresses provisioned to your account by the managed service will be charged. These charges will be associated with Amazon VPC service in your AWS Cost and Usage Report. Pricing for public IPv4 addresses A public IPv4 address is an IPv4 address that is routable from the internet. A public IPv4 address is necessary for a resource to be directly reachable from the internet over IPv4. If you are an existing or new AWS Free Tier customer, you get 750 hours of public IPv4 address usage with the EC2 service at no charge. If you are not using the EC2 service in the AWS Free Tier, Public IPv4 addresses are charged. For specific pricing information, see the Public IPv4 address tab in Amazon VPC Pricing. Private IPv4 addresses (RFC 1918) are not charged. 
For more information about how public IPv4 addresses are charged for shared VPCs, see Billing and metering for the owner and participants. Public IPv4 addresses have the following types: • Elastic IP addresses (EIPs): Static, public IPv4 addresses provided by Amazon that you can associate with an EC2 instance, elastic network interface, or AWS resource. • EC2 public IPv4 addresses: Public IPv4 addresses assigned to an EC2 instance by Amazon (if the EC2 instance is launched into a default subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses"} +{"global_id": 1174, "doc_id": "vpc", "chunk_id": "3", "question_id": 3, "question": "What types of public IPv4 addresses are mentioned in the chunk?", "answer_span": "Public IPv4 addresses have the following types: • Elastic IP addresses (EIPs): Static, public IPv4 addresses provided by Amazon that you can associate with an EC2 instance, elastic network interface, or AWS resource. • EC2 public IPv4 addresses: Public IPv4 addresses assigned to an EC2 instance by Amazon (if the EC2 instance is launched into a default subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses", "chunk": "Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. Any public IPv4 addresses provisioned to your account by the managed service will be charged. These charges will be associated with Amazon VPC service in your AWS Cost and Usage Report. Pricing for public IPv4 addresses A public IPv4 address is an IPv4 address that is routable from the internet. A public IPv4 address is necessary for a resource to be directly reachable from the internet over IPv4. If you are an existing or new AWS Free Tier customer, you get 750 hours of public IPv4 address usage with the EC2 service at no charge. If you are not using the EC2 service in the AWS Free Tier, Public IPv4 addresses are charged. For specific pricing information, see the Public IPv4 address tab in Amazon VPC Pricing. Private IPv4 addresses (RFC 1918) are not charged. For more information about how public IPv4 addresses are charged for shared VPCs, see Billing and metering for the owner and participants. Public IPv4 addresses have the following types: • Elastic IP addresses (EIPs): Static, public IPv4 addresses provided by Amazon that you can associate with an EC2 instance, elastic network interface, or AWS resource. • EC2 public IPv4 addresses: Public IPv4 addresses assigned to an EC2 instance by Amazon (if the EC2 instance is launched into a default subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). 
• Service-managed IPv4 addresses: Public IPv4 addresses"} +{"global_id": 1175, "doc_id": "vpc", "chunk_id": "3", "question_id": 4, "question": "Are private IPv4 addresses charged?", "answer_span": "Private IPv4 addresses (RFC 1918) are not charged.", "chunk": "Amazon EMR, without having a VPC set up beforehand. It does this by using the default Getting started with Amazon VPC 3 Amazon Virtual Private Cloud User Guide VPC in your account if you have one. Any public IPv4 addresses provisioned to your account by the managed service will be charged. These charges will be associated with Amazon VPC service in your AWS Cost and Usage Report. Pricing for public IPv4 addresses A public IPv4 address is an IPv4 address that is routable from the internet. A public IPv4 address is necessary for a resource to be directly reachable from the internet over IPv4. If you are an existing or new AWS Free Tier customer, you get 750 hours of public IPv4 address usage with the EC2 service at no charge. If you are not using the EC2 service in the AWS Free Tier, Public IPv4 addresses are charged. For specific pricing information, see the Public IPv4 address tab in Amazon VPC Pricing. Private IPv4 addresses (RFC 1918) are not charged. For more information about how public IPv4 addresses are charged for shared VPCs, see Billing and metering for the owner and participants. Public IPv4 addresses have the following types: • Elastic IP addresses (EIPs): Static, public IPv4 addresses provided by Amazon that you can associate with an EC2 instance, elastic network interface, or AWS resource. • EC2 public IPv4 addresses: Public IPv4 addresses assigned to an EC2 instance by Amazon (if the EC2 instance is launched into a default subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses"} +{"global_id": 1176, "doc_id": "vpc", "chunk_id": "4", "question_id": 1, "question": "What are BYOIPv4 addresses?", "answer_span": "Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP).", "chunk": "subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses automatically provisioned on AWS resources and managed by an AWS service. For example, public IPv4 addresses on Amazon ECS, Amazon RDS, or Amazon WorkSpaces. The following list shows the most common AWS services that can use public IPv4 addresses. 
• Amazon AppStream 2.0 • AWS Client VPN • AWS Database Migration Service • Amazon EC2 • Amazon Elastic Container Service Pricing for Amazon VPC 4 Amazon Virtual Private Cloud User Guide • Amazon EKS • Amazon EMR • Amazon GameLift Servers • AWS Global Accelerator • AWS Mainframe Modernization • Amazon Managed Streaming for Apache Kafka • Amazon MQ • Amazon RDS • Amazon Redshift • AWS Site-to-Site VPN • Amazon VPC NAT gateway • Amazon WorkSpaces • Elastic Load Balancing Pricing for Amazon VPC 5 Amazon Virtual Private Cloud User Guide How Amazon VPC works With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following is a visual representation of a VPC and its resources from the Preview pane shown when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. This"} +{"global_id": 1177, "doc_id": "vpc", "chunk_id": "4", "question_id": 2, "question": "What are service-managed IPv4 addresses?", "answer_span": "Public IPv4 addresses automatically provisioned on AWS resources and managed by an AWS service.", "chunk": "subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses automatically provisioned on AWS resources and managed by an AWS service. For example, public IPv4 addresses on Amazon ECS, Amazon RDS, or Amazon WorkSpaces. The following list shows the most common AWS services that can use public IPv4 addresses. • Amazon AppStream 2.0 • AWS Client VPN • AWS Database Migration Service • Amazon EC2 • Amazon Elastic Container Service Pricing for Amazon VPC 4 Amazon Virtual Private Cloud User Guide • Amazon EKS • Amazon EMR • Amazon GameLift Servers • AWS Global Accelerator • AWS Mainframe Modernization • Amazon Managed Streaming for Apache Kafka • Amazon MQ • Amazon RDS • Amazon Redshift • AWS Site-to-Site VPN • Amazon VPC NAT gateway • Amazon WorkSpaces • Elastic Load Balancing Pricing for Amazon VPC 5 Amazon Virtual Private Cloud User Guide How Amazon VPC works With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following is a visual representation of a VPC and its resources from the Preview pane shown when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. 
This"} +{"global_id": 1178, "doc_id": "vpc", "chunk_id": "4", "question_id": 3, "question": "Which AWS services can use public IPv4 addresses?", "answer_span": "The following list shows the most common AWS services that can use public IPv4 addresses.", "chunk": "subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses automatically provisioned on AWS resources and managed by an AWS service. For example, public IPv4 addresses on Amazon ECS, Amazon RDS, or Amazon WorkSpaces. The following list shows the most common AWS services that can use public IPv4 addresses. • Amazon AppStream 2.0 • AWS Client VPN • AWS Database Migration Service • Amazon EC2 • Amazon Elastic Container Service Pricing for Amazon VPC 4 Amazon Virtual Private Cloud User Guide • Amazon EKS • Amazon EMR • Amazon GameLift Servers • AWS Global Accelerator • AWS Mainframe Modernization • Amazon Managed Streaming for Apache Kafka • Amazon MQ • Amazon RDS • Amazon Redshift • AWS Site-to-Site VPN • Amazon VPC NAT gateway • Amazon WorkSpaces • Elastic Load Balancing Pricing for Amazon VPC 5 Amazon Virtual Private Cloud User Guide How Amazon VPC works With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following is a visual representation of a VPC and its resources from the Preview pane shown when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. This"} +{"global_id": 1179, "doc_id": "vpc", "chunk_id": "4", "question_id": 4, "question": "What does Amazon VPC allow you to do?", "answer_span": "With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined.", "chunk": "subnet or if the instance is launched into a subnet that’s been configured to automatically assign a public IPv4 address). • BYOIPv4 addresses: Public IPv4 addresses in the IPv4 address range that you’ve brought to AWS using Bring your own IP addresses (BYOIP). • Service-managed IPv4 addresses: Public IPv4 addresses automatically provisioned on AWS resources and managed by an AWS service. For example, public IPv4 addresses on Amazon ECS, Amazon RDS, or Amazon WorkSpaces. The following list shows the most common AWS services that can use public IPv4 addresses. 
• Amazon AppStream 2.0 • AWS Client VPN • AWS Database Migration Service • Amazon EC2 • Amazon Elastic Container Service Pricing for Amazon VPC 4 Amazon Virtual Private Cloud User Guide • Amazon EKS • Amazon EMR • Amazon GameLift Servers • AWS Global Accelerator • AWS Mainframe Modernization • Amazon Managed Streaming for Apache Kafka • Amazon MQ • Amazon RDS • Amazon Redshift • AWS Site-to-Site VPN • Amazon VPC NAT gateway • Amazon WorkSpaces • Elastic Load Balancing Pricing for Amazon VPC 5 Amazon Virtual Private Cloud User Guide How Amazon VPC works With Amazon Virtual Private Cloud (Amazon VPC), you can launch AWS resources in a logically isolated virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data center, with the benefits of using the scalable infrastructure of AWS. The following is a visual representation of a VPC and its resources from the Preview pane shown when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. This"} +{"global_id": 1180, "doc_id": "vpc", "chunk_id": "5", "question_id": 1, "question": "What is a virtual private cloud (VPC)?", "answer_span": "A virtual private cloud (VPC) is a virtual network dedicated to your AWS account.", "chunk": "when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. This VPC is configured with an IPv4 CIDR and an Amazon-provided IPv6 CIDR, subnets in two Availability Zones, three route tables, an internet gateway, and a gateway endpoint. Because we've selected the internet gateway, the visualization indicates that traffic from the public subnets is routed to the internet because the corresponding route table sends the traffic to the internet gateway. Concepts • VPCs and subnets • Default and nondefault VPCs • Route tables • Access the internet • Access a corporate or home network • Connect VPCs and networks 6 Amazon Virtual Private Cloud User Guide • AWS private global network VPCs and subnets A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the VPC, add subnets, add gateways, and associate security groups. A subnet is a range of IP addresses in your VPC. You launch AWS resources, such as Amazon EC2 instances, into your subnets. You can connect a subnet to the internet, other VPCs, and your own data centers, and route traffic to and from your subnets using route tables. Learn more • IP addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. A default VPC is configured and ready for you to use. 
For example, it has a default subnet in each Availability"} +{"global_id": 1181, "doc_id": "vpc", "chunk_id": "5", "question_id": 2, "question": "What can you specify for the VPC?", "answer_span": "You can specify an IP address range for the VPC, add subnets, add gateways, and associate security groups.", "chunk": "when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. This VPC is configured with an IPv4 CIDR and an Amazon-provided IPv6 CIDR, subnets in two Availability Zones, three route tables, an internet gateway, and a gateway endpoint. Because we've selected the internet gateway, the visualization indicates that traffic from the public subnets is routed to the internet because the corresponding route table sends the traffic to the internet gateway. Concepts • VPCs and subnets • Default and nondefault VPCs • Route tables • Access the internet • Access a corporate or home network • Connect VPCs and networks 6 Amazon Virtual Private Cloud User Guide • AWS private global network VPCs and subnets A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the VPC, add subnets, add gateways, and associate security groups. A subnet is a range of IP addresses in your VPC. You launch AWS resources, such as Amazon EC2 instances, into your subnets. You can connect a subnet to the internet, other VPCs, and your own data centers, and route traffic to and from your subnets using route tables. Learn more • IP addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. A default VPC is configured and ready for you to use. For example, it has a default subnet in each Availability"} +{"global_id": 1182, "doc_id": "vpc", "chunk_id": "5", "question_id": 3, "question": "What does a default VPC come with?", "answer_span": "A default VPC is configured and ready for you to use.", "chunk": "when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. This VPC is configured with an IPv4 CIDR and an Amazon-provided IPv6 CIDR, subnets in two Availability Zones, three route tables, an internet gateway, and a gateway endpoint. Because we've selected the internet gateway, the visualization indicates that traffic from the public subnets is routed to the internet because the corresponding route table sends the traffic to the internet gateway. Concepts • VPCs and subnets • Default and nondefault VPCs • Route tables • Access the internet • Access a corporate or home network • Connect VPCs and networks 6 Amazon Virtual Private Cloud User Guide • AWS private global network VPCs and subnets A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the VPC, add subnets, add gateways, and associate security groups. A subnet is a range of IP addresses in your VPC. 
You launch AWS resources, such as Amazon EC2 instances, into your subnets. You can connect a subnet to the internet, other VPCs, and your own data centers, and route traffic to and from your subnets using route tables. Learn more • IP addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. A default VPC is configured and ready for you to use. For example, it has a default subnet in each Availability"} +{"global_id": 1183, "doc_id": "vpc", "chunk_id": "5", "question_id": 4, "question": "What happens if your account was created after December 4, 2013?", "answer_span": "If your account was created after December 4, 2013, it comes with a default VPC in each Region.", "chunk": "when you create a VPC using the AWS Management Console. For an existing VPC, you can access this visualization on the Resource map tab. This example shows the resources that are initially selected on the Create VPC page when you choose to create the VPC plus other networking resources. This VPC is configured with an IPv4 CIDR and an Amazon-provided IPv6 CIDR, subnets in two Availability Zones, three route tables, an internet gateway, and a gateway endpoint. Because we've selected the internet gateway, the visualization indicates that traffic from the public subnets is routed to the internet because the corresponding route table sends the traffic to the internet gateway. Concepts • VPCs and subnets • Default and nondefault VPCs • Route tables • Access the internet • Access a corporate or home network • Connect VPCs and networks 6 Amazon Virtual Private Cloud User Guide • AWS private global network VPCs and subnets A virtual private cloud (VPC) is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can specify an IP address range for the VPC, add subnets, add gateways, and associate security groups. A subnet is a range of IP addresses in your VPC. You launch AWS resources, such as Amazon EC2 instances, into your subnets. You can connect a subnet to the internet, other VPCs, and your own data centers, and route traffic to and from your subnets using route tables. Learn more • IP addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. A default VPC is configured and ready for you to use. For example, it has a default subnet in each Availability"} +{"global_id": 1184, "doc_id": "vpc", "chunk_id": "6", "question_id": 1, "question": "What is a default VPC?", "answer_span": "A default VPC is configured and ready for you to use.", "chunk": "addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. A default VPC is configured and ready for you to use. For example, it has a default subnet in each Availability Zone in the Region, an attached internet gateway, a route in the main route table that sends all traffic to the internet gateway, and DNS settings that automatically assign public DNS hostnames to instances with public IP addresses and enable DNS resolution through the Amazonprovided DNS server (see DNS attributes for your VPC). Therefore, an EC2 instance that is launched in a default subnet automatically has access to the internet. 
If you have a default VPC in a Region and you don't specify a subnet when you launch an EC2 instance into that Region, we choose one of the default subnets and launch the instance into that subnet. You can also create your own VPC, and configure it as you need. This is known as a nondefault VPC. Subnets that you create in your nondefault VPC and additional subnets that you create in your default VPC are called nondefault subnets. Learn more • the section called “Default VPCs” VPCs and subnets 7 Amazon Virtual Private Cloud User Guide • the section called “Create a VPC” Route tables A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection"} +{"global_id": 1185, "doc_id": "vpc", "chunk_id": "6", "question_id": 2, "question": "What happens if you don't specify a subnet when launching an EC2 instance in a default VPC?", "answer_span": "we choose one of the default subnets and launch the instance into that subnet.", "chunk": "addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. A default VPC is configured and ready for you to use. For example, it has a default subnet in each Availability Zone in the Region, an attached internet gateway, a route in the main route table that sends all traffic to the internet gateway, and DNS settings that automatically assign public DNS hostnames to instances with public IP addresses and enable DNS resolution through the Amazonprovided DNS server (see DNS attributes for your VPC). Therefore, an EC2 instance that is launched in a default subnet automatically has access to the internet. If you have a default VPC in a Region and you don't specify a subnet when you launch an EC2 instance into that Region, we choose one of the default subnets and launch the instance into that subnet. You can also create your own VPC, and configure it as you need. This is known as a nondefault VPC. Subnets that you create in your nondefault VPC and additional subnets that you create in your default VPC are called nondefault subnets. Learn more • the section called “Default VPCs” VPCs and subnets 7 Amazon Virtual Private Cloud User Guide • the section called “Create a VPC” Route tables A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection"} +{"global_id": 1186, "doc_id": "vpc", "chunk_id": "6", "question_id": 3, "question": "What are nondefault subnets?", "answer_span": "Subnets that you create in your nondefault VPC and additional subnets that you create in your default VPC are called nondefault subnets.", "chunk": "addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. 
A default VPC is configured and ready for you to use. For example, it has a default subnet in each Availability Zone in the Region, an attached internet gateway, a route in the main route table that sends all traffic to the internet gateway, and DNS settings that automatically assign public DNS hostnames to instances with public IP addresses and enable DNS resolution through the Amazonprovided DNS server (see DNS attributes for your VPC). Therefore, an EC2 instance that is launched in a default subnet automatically has access to the internet. If you have a default VPC in a Region and you don't specify a subnet when you launch an EC2 instance into that Region, we choose one of the default subnets and launch the instance into that subnet. You can also create your own VPC, and configure it as you need. This is known as a nondefault VPC. Subnets that you create in your nondefault VPC and additional subnets that you create in your default VPC are called nondefault subnets. Learn more • the section called “Default VPCs” VPCs and subnets 7 Amazon Virtual Private Cloud User Guide • the section called “Create a VPC” Route tables A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection"} +{"global_id": 1187, "doc_id": "vpc", "chunk_id": "6", "question_id": 4, "question": "What does a route table contain?", "answer_span": "A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed.", "chunk": "addressing • Virtual private clouds • Subnets Default and nondefault VPCs If your account was created after December 4, 2013, it comes with a default VPC in each Region. A default VPC is configured and ready for you to use. For example, it has a default subnet in each Availability Zone in the Region, an attached internet gateway, a route in the main route table that sends all traffic to the internet gateway, and DNS settings that automatically assign public DNS hostnames to instances with public IP addresses and enable DNS resolution through the Amazonprovided DNS server (see DNS attributes for your VPC). Therefore, an EC2 instance that is launched in a default subnet automatically has access to the internet. If you have a default VPC in a Region and you don't specify a subnet when you launch an EC2 instance into that Region, we choose one of the default subnets and launch the instance into that subnet. You can also create your own VPC, and configure it as you need. This is known as a nondefault VPC. Subnets that you create in your nondefault VPC and additional subnets that you create in your default VPC are called nondefault subnets. Learn more • the section called “Default VPCs” VPCs and subnets 7 Amazon Virtual Private Cloud User Guide • the section called “Create a VPC” Route tables A route table contains a set of rules, called routes, that are used to determine where network traffic from your VPC is directed. You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. 
Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection"} +{"global_id": 1188, "doc_id": "vpc", "chunk_id": "7", "question_id": 1, "question": "How can you associate a subnet with a particular route table?", "answer_span": "You can explicitly associate a subnet with a particular route table.", "chunk": "You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection through which to send the traffic (the target). Learn more • Configure route tables Access the internet You control how the instances that you launch into a VPC access resources outside the VPC. A default VPC includes an internet gateway, and each default subnet is a public subnet. Each instance that you launch into a default subnet has a private IPv4 address and a public IPv4 address. These instances can communicate with the internet through the internet gateway. An internet gateway enables your instances to connect to the internet through the Amazon EC2 network edge. By default, each instance that you launch into a nondefault subnet has a private IPv4 address, but no public IPv4 address, unless you specifically assign one at launch, or you modify the subnet's public IP address attribute. These instances can communicate with each other, but can't access the internet. You can enable internet access for an instance launched into a nondefault subnet by attaching an internet gateway to its VPC (if its VPC is not a default VPC) and associating an Elastic IP address with the instance. Alternatively, to allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it"} +{"global_id": 1189, "doc_id": "vpc", "chunk_id": "7", "question_id": 2, "question": "What does each route in a route table specify?", "answer_span": "Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection through which to send the traffic (the target).", "chunk": "You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection through which to send the traffic (the target). Learn more • Configure route tables Access the internet You control how the instances that you launch into a VPC access resources outside the VPC. A default VPC includes an internet gateway, and each default subnet is a public subnet. Each instance that you launch into a default subnet has a private IPv4 address and a public IPv4 address. These instances can communicate with the internet through the internet gateway. An internet gateway enables your instances to connect to the internet through the Amazon EC2 network edge. 
By default, each instance that you launch into a nondefault subnet has a private IPv4 address, but no public IPv4 address, unless you specifically assign one at launch, or you modify the subnet's public IP address attribute. These instances can communicate with each other, but can't access the internet. You can enable internet access for an instance launched into a nondefault subnet by attaching an internet gateway to its VPC (if its VPC is not a default VPC) and associating an Elastic IP address with the instance. Alternatively, to allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it"} +{"global_id": 1190, "doc_id": "vpc", "chunk_id": "7", "question_id": 3, "question": "What does a default VPC include?", "answer_span": "A default VPC includes an internet gateway, and each default subnet is a public subnet.", "chunk": "You can explicitly associate a subnet with a particular route table. Otherwise, the subnet is implicitly associated with the main route table. Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection through which to send the traffic (the target). Learn more • Configure route tables Access the internet You control how the instances that you launch into a VPC access resources outside the VPC. A default VPC includes an internet gateway, and each default subnet is a public subnet. Each instance that you launch into a default subnet has a private IPv4 address and a public IPv4 address. These instances can communicate with the internet through the internet gateway. An internet gateway enables your instances to connect to the internet through the Amazon EC2 network edge. By default, each instance that you launch into a nondefault subnet has a private IPv4 address, but no public IPv4 address, unless you specifically assign one at launch, or you modify the subnet's public IP address attribute. These instances can communicate with each other, but can't access the internet. You can enable internet access for an instance launched into a nondefault subnet by attaching an internet gateway to its VPC (if its VPC is not a default VPC) and associating an Elastic IP address with the instance. Alternatively, to allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it"} +{"global_id": 1191, "doc_id": "vpc", "chunk_id": "7", "question_id": 4, "question": "How can you enable internet access for an instance launched into a nondefault subnet?", "answer_span": "You can enable internet access for an instance launched into a nondefault subnet by attaching an internet gateway to its VPC (if its VPC is not a default VPC) and associating an Elastic IP address with the instance.", "chunk": "You can explicitly associate a subnet with a particular route table. 
Otherwise, the subnet is implicitly associated with the main route table. Each route in a route table specifies the range of IP addresses where you want the traffic to go (the destination) and the gateway, network interface, or connection through which to send the traffic (the target). Learn more • Configure route tables Access the internet You control how the instances that you launch into a VPC access resources outside the VPC. A default VPC includes an internet gateway, and each default subnet is a public subnet. Each instance that you launch into a default subnet has a private IPv4 address and a public IPv4 address. These instances can communicate with the internet through the internet gateway. An internet gateway enables your instances to connect to the internet through the Amazon EC2 network edge. By default, each instance that you launch into a nondefault subnet has a private IPv4 address, but no public IPv4 address, unless you specifically assign one at launch, or you modify the subnet's public IP address attribute. These instances can communicate with each other, but can't access the internet. You can enable internet access for an instance launched into a nondefault subnet by attaching an internet gateway to its VPC (if its VPC is not a default VPC) and associating an Elastic IP address with the instance. Alternatively, to allow an instance in your VPC to initiate outbound connections to the internet but prevent unsolicited inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it"} +{"global_id": 1192, "doc_id": "vpc", "chunk_id": "8", "question_id": 1, "question": "What device can be used for inbound connections from the internet?", "answer_span": "you can use a network address translation (NAT) device.", "chunk": "inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it possible for an instance in a private subnet to connect to the Route tables 8 Amazon Virtual Private Cloud User Guide internet through the NAT device, routing traffic from the instance to the internet gateway and any responses to the instance. If you associate an IPv6 CIDR block with your VPC and assign IPv6 addresses to your instances, instances can connect to the internet over IPv6 through an internet gateway. Alternatively, instances can initiate outbound connections to the internet over IPv6 using an egress-only internet gateway. IPv6 traffic is separate from IPv4 traffic; your route tables must include separate routes for IPv6 traffic. Learn more • Enable internet access for a VPC using an internet gateway • Enable outbound IPv6 traffic using an egress-only internet gateway • Connect to the internet or other networks using NAT devices Access a corporate or home network You can optionally connect your VPC to your own corporate data center using an IPsec AWS Siteto-Site VPN connection, making the AWS Cloud an extension of your data center. A Site-to-Site VPN connection consists of two VPN tunnels between a virtual private gateway or transit gateway on the AWS side, and a customer gateway device located in your data center. 
A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between"} +{"global_id": 1193, "doc_id": "vpc", "chunk_id": "8", "question_id": 2, "question": "What does NAT map multiple private IPv4 addresses to?", "answer_span": "NAT maps multiple private IPv4 addresses to a single public IPv4 address.", "chunk": "inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it possible for an instance in a private subnet to connect to the Route tables 8 Amazon Virtual Private Cloud User Guide internet through the NAT device, routing traffic from the instance to the internet gateway and any responses to the instance. If you associate an IPv6 CIDR block with your VPC and assign IPv6 addresses to your instances, instances can connect to the internet over IPv6 through an internet gateway. Alternatively, instances can initiate outbound connections to the internet over IPv6 using an egress-only internet gateway. IPv6 traffic is separate from IPv4 traffic; your route tables must include separate routes for IPv6 traffic. Learn more • Enable internet access for a VPC using an internet gateway • Enable outbound IPv6 traffic using an egress-only internet gateway • Connect to the internet or other networks using NAT devices Access a corporate or home network You can optionally connect your VPC to your own corporate data center using an IPsec AWS Siteto-Site VPN connection, making the AWS Cloud an extension of your data center. A Site-to-Site VPN connection consists of two VPN tunnels between a virtual private gateway or transit gateway on the AWS side, and a customer gateway device located in your data center. A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between"} +{"global_id": 1194, "doc_id": "vpc", "chunk_id": "8", "question_id": 3, "question": "How can instances connect to the internet over IPv6?", "answer_span": "instances can connect to the internet over IPv6 through an internet gateway.", "chunk": "inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it possible for an instance in a private subnet to connect to the Route tables 8 Amazon Virtual Private Cloud User Guide internet through the NAT device, routing traffic from the instance to the internet gateway and any responses to the instance. If you associate an IPv6 CIDR block with your VPC and assign IPv6 addresses to your instances, instances can connect to the internet over IPv6 through an internet gateway. Alternatively, instances can initiate outbound connections to the internet over IPv6 using an egress-only internet gateway. 
IPv6 traffic is separate from IPv4 traffic; your route tables must include separate routes for IPv6 traffic. Learn more • Enable internet access for a VPC using an internet gateway • Enable outbound IPv6 traffic using an egress-only internet gateway • Connect to the internet or other networks using NAT devices Access a corporate or home network You can optionally connect your VPC to your own corporate data center using an IPsec AWS Siteto-Site VPN connection, making the AWS Cloud an extension of your data center. A Site-to-Site VPN connection consists of two VPN tunnels between a virtual private gateway or transit gateway on the AWS side, and a customer gateway device located in your data center. A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between"} +{"global_id": 1195, "doc_id": "vpc", "chunk_id": "8", "question_id": 4, "question": "What is a Site-to-Site VPN connection?", "answer_span": "A Site-to-Site VPN connection consists of two VPN tunnels between a virtual private gateway or transit gateway on the AWS side, and a customer gateway device located in your data center.", "chunk": "inbound connections from the internet, you can use a network address translation (NAT) device. NAT maps multiple private IPv4 addresses to a single public IPv4 address. You can configure the NAT device with an Elastic IP address and connect it to the internet through an internet gateway. This makes it possible for an instance in a private subnet to connect to the Route tables 8 Amazon Virtual Private Cloud User Guide internet through the NAT device, routing traffic from the instance to the internet gateway and any responses to the instance. If you associate an IPv6 CIDR block with your VPC and assign IPv6 addresses to your instances, instances can connect to the internet over IPv6 through an internet gateway. Alternatively, instances can initiate outbound connections to the internet over IPv6 using an egress-only internet gateway. IPv6 traffic is separate from IPv4 traffic; your route tables must include separate routes for IPv6 traffic. Learn more • Enable internet access for a VPC using an internet gateway • Enable outbound IPv6 traffic using an egress-only internet gateway • Connect to the internet or other networks using NAT devices Access a corporate or home network You can optionally connect your VPC to your own corporate data center using an IPsec AWS Siteto-Site VPN connection, making the AWS Cloud an extension of your data center. A Site-to-Site VPN connection consists of two VPN tunnels between a virtual private gateway or transit gateway on the AWS side, and a customer gateway device located in your data center. A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between"} +{"global_id": 1196, "doc_id": "vpc", "chunk_id": "9", "question_id": 1, "question": "What is a customer gateway device?", "answer_span": "A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection.", "chunk": "your data center. 
A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can also create a transit gateway and use it to interconnect your VPCs and on-premises networks. The transit gateway acts as a Regional virtual router for traffic flowing between its Access a corporate or home network 9"} +{"global_id": 1197, "doc_id": "vpc", "chunk_id": "9", "question_id": 2, "question": "What can you create to route traffic between two VPCs privately?", "answer_span": "You can create a VPC peering connection between two VPCs that enables you to route traffic between them privately.", "chunk": "your data center. A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can also create a transit gateway and use it to interconnect your VPCs and on-premises networks. The transit gateway acts as a Regional virtual router for traffic flowing between its Access a corporate or home network 9"} +{"global_id": 1198, "doc_id": "vpc", "chunk_id": "9", "question_id": 3, "question": "How can instances in either VPC communicate with each other?", "answer_span": "Instances in either VPC can communicate with each other as if they are within the same network.", "chunk": "your data center. A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can also create a transit gateway and use it to interconnect your VPCs and on-premises networks. The transit gateway acts as a Regional virtual router for traffic flowing between its Access a corporate or home network 9"} +{"global_id": 1199, "doc_id": "vpc", "chunk_id": "9", "question_id": 4, "question": "What does the transit gateway act as?", "answer_span": "The transit gateway acts as a Regional virtual router for traffic flowing between its Access a corporate or home network.", "chunk": "your data center. A customer gateway device is a physical device or software appliance that you configure on your side of the Site-to-Site VPN connection. Learn more • AWS Site-to-Site VPN User Guide • Amazon VPC Transit Gateways Connect VPCs and networks You can create a VPC peering connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can also create a transit gateway and use it to interconnect your VPCs and on-premises networks. 
The transit gateway acts as a Regional virtual router for traffic flowing between its Access a corporate or home network 9"} +{"global_id": 1200, "doc_id": "lambda", "chunk_id": "0", "question_id": 1, "question": "What is AWS Lambda?", "answer_span": "You can use AWS Lambda to run code without provisioning or managing servers.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 1201, "doc_id": "lambda", "chunk_id": "0", "question_id": 2, "question": "What does Lambda manage?", "answer_span": "Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. 
When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 1202, "doc_id": "lambda", "chunk_id": "0", "question_id": 3, "question": "When is Lambda an ideal compute service?", "answer_span": "Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. 
Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 1203, "doc_id": "lambda", "chunk_id": "0", "question_id": 4, "question": "What can developers use to build web applications with AWS services?", "answer_span": "developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions.", "chunk": "AWS Lambda Developer Guide What is AWS Lambda? You can use AWS Lambda to run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and manages all the computing resources, including server and operating system maintenance, capacity provisioning, automatic scaling, and logging. You organize your code into Lambda functions. The Lambda service runs your function only when needed and scales automatically. For pricing information, see AWS Lambda Pricing for details. When using Lambda, you are responsible only for your code. Lambda manages the compute fleet that offers a balance of memory, CPU, network, and other resources to run your code. Because Lambda manages these resources, you cannot log in to compute instances or customize the operating system on provided runtimes. When to use Lambda Lambda is an ideal compute service for application scenarios that need to scale up rapidly, and scale down to zero when not in demand. For example, you can use Lambda for: • Stream processing: Use Lambda and Amazon Kinesis to process real-time streaming data for application activity tracking, transaction order processing, clickstream analysis, data cleansing, log filtering, indexing, social media analysis, Internet of Things (IoT) device data telemetry, and metering. • Web applications: Combine Lambda with other AWS services to build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers. To build web applications with AWS services, developers can use infrastructure as code (IaC) and orchestration tools such as AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React"} +{"global_id": 1204, "doc_id": "lambda", "chunk_id": "1", "question_id": 1, "question": "What services can be coordinated using AWS Step Functions?", "answer_span": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. 
When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 1205, "doc_id": "lambda", "chunk_id": "1", "question_id": 2, "question": "What can be built using Lambda and Amazon API Gateway?", "answer_span": "Build backends using Lambda and Amazon API Gateway to authenticate and process API requests.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. 
Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 1206, "doc_id": "lambda", "chunk_id": "1", "question_id": 3, "question": "What is used to trigger Lambda data processing in real time after an upload?", "answer_span": "Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. • File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 1207, "doc_id": "lambda", "chunk_id": "1", "question_id": 4, "question": "How does Lambda control security and access?", "answer_span": "You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code.", "chunk": "AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), AWS Serverless Application Model, or coordinate complex workflows using AWS Step Functions. • Mobile backends: Build backends using Lambda and Amazon API Gateway to authenticate and process API requests. Use AWS Amplify to easily integrate with your iOS, Android, Web, and React Native frontends. • IoT backends: Build serverless backends using Lambda to handle web, mobile, IoT, and thirdparty API requests. 
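For the scheduled and periodic tasks use case mentioned above, a handler invoked by an EventBridge rule receives a small scheduled-event payload; the maintenance work and the example cron expression are illustrative assumptions.

import datetime

def lambda_handler(event, context):
    # A rule such as cron(0 2 * * ? *) or rate(1 day) invokes the function
    # on a schedule. The scheduled event identifies its source and timestamp.
    print("Invoked by:", event.get("source"), event.get("detail-type"))
    print("Scheduled time:", event.get("time"))

    # Placeholder for the periodic work (archiving, report generation, cleanup).
    started = datetime.datetime.now(datetime.timezone.utc).isoformat()
    return {"maintenanceRunStartedAt": started}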
• File processing: Use Amazon Simple Storage Service (Amazon S3) to trigger Lambda data processing in real time after an upload. When to use Lambda 1 AWS Lambda Developer Guide • Database Operations and Integration: Use Lambda to process database interactions both reactively and proactively, from handling queue messages for Amazon RDS operations like user registrations and order submissions, to responding to DynamoDB changes for audit logging, data replication, and automated workflows. • Scheduled and Periodic Tasks: Use Lambda with EventBridge rules to execute time-based operations such as database maintenance, data archiving, report generation, and other scheduled business processes using cron-like expressions. How Lambda works Because Lambda is a serverless, event-driven compute service, it uses a different programming paradigm than traditional web applications. The following model illustrates how Lambda fundamentally works: 1. You write and organize your code in Lambda functions, which are the basic building blocks you use to create a Lambda application. 2. You control security and access through Lambda permissions, using execution roles to manage what AWS services your functions can interact with and what resource policies can interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and"} +{"global_id": 1208, "doc_id": "lambda", "chunk_id": "2", "question_id": 1, "question": "What format does event data come in when triggered by event sources and AWS services?", "answer_span": "passing event data in JSON format", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. 
• Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 1209, "doc_id": "lambda", "chunk_id": "2", "question_id": 2, "question": "What do Lambda layers optimize?", "answer_span": "Lambda layers optimize code reuse and maintenance", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 1210, "doc_id": "lambda", "chunk_id": "2", "question_id": 3, "question": "What do environment variables modify?", "answer_span": "Environment variables modify application behavior without new code deployments", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. 
Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 1211, "doc_id": "lambda", "chunk_id": "2", "question_id": 4, "question": "What can Lambda SnapStart provide in terms of startup performance?", "answer_span": "Lambda SnapStart can provide as low as sub-second startup performance", "chunk": "interact with your code. 3. Event sources and AWS services trigger your Lambda functions, passing event data in JSON format, which your functions process (this includes event source mappings). 4. Lambda runs your code with language-specific runtimes (like Node.js and Python) in execution environments that package your runtime, layers, and extensions. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Key features Configure, control, and deploy secure applications: • Environment variables modify application behavior without new code deployments. • Versions safely test new features while maintaining stable production environments. How Lambda works 2 AWS Lambda Developer Guide • Lambda layers optimize code reuse and maintenance by sharing common components across multiple functions. • Code signing enforce security compliance by ensuring only approved code reaches production systems. Scale and perform reliably: • Concurrency and scaling controls precisely manage application responsiveness and resource utilization during traffic spikes. • Lambda SnapStart significantly reduce cold start times. Lambda SnapStart can provide as low as sub-second startup performance, typically with no changes to your function code. • Response streaming optimize function performance by delivering large payloads incrementally for real-time processing. • Container images package functions with complex dependencies using container workflows. 
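As a small sketch of the environment variables feature listed above, configuration can be read with os.environ so behavior changes without a new deployment; the TABLE_NAME and LOG_LEVEL variable names are assumptions chosen for illustration.

import os

# Values are set on the function configuration, not in the code.
TABLE_NAME = os.environ.get("TABLE_NAME", "example-table")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

def lambda_handler(event, context):
    print(f"Using table {TABLE_NAME} at log level {LOG_LEVEL}")
    return {"table": TABLE_NAME}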
Connect and integrate seamlessly: • VPC networks secure sensitive resources and internal services. • File system integration that shares persistent data and manage stateful operations across function invocations. • Function URLs create public-facing APIs and endpoints without additional services. • Lambda extensions augment functions with monitoring, security, and operational tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks"} +{"global_id": 1212, "doc_id": "lambda", "chunk_id": "3", "question_id": 1, "question": "What are Lambda functions?", "answer_span": "A Lambda function is a small block of code that runs in response to events.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 1213, "doc_id": "lambda", "chunk_id": "3", "question_id": 2, "question": "What do function handlers do in Lambda?", "answer_span": "Function handlers are the entry point for event objects that your Lambda function code processes.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. 
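To illustrate the file system integration feature above, a function with an Amazon EFS access point attached can persist state with ordinary file I/O; the /mnt/data mount path and the counter file are assumptions for this sketch, and concurrent writers would need their own coordination.

import json
import os

MOUNT_PATH = "/mnt/data"          # assumed EFS mount path configured on the function
COUNTER_FILE = os.path.join(MOUNT_PATH, "invocation_counter.json")

def lambda_handler(event, context):
    # State written here survives across invocations and is shared between
    # execution environments that mount the same file system.
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = json.load(f).get("count", 0)
    count += 1
    with open(COUNTER_FILE, "w") as f:
        json.dump({"count": count}, f)
    return {"count": count}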
To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 1214, "doc_id": "lambda", "chunk_id": "3", "question_id": 3, "question": "What manages the resources required to run a Lambda function?", "answer_span": "Lambda execution environments manage the resources required to run your function.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. 
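A minimal sketch of the handler described above: Lambda passes the event object as the first argument and a context object with invocation metadata as the second; the printed fields are standard context attributes, while the return value is an arbitrary example.

def lambda_handler(event, context):
    # The event argument carries the JSON data that triggered the invocation,
    # already converted into a Python object (usually a dictionary).
    print("Received event keys:", list(event.keys()) if isinstance(event, dict) else type(event))

    # The context object describes the invocation and execution environment.
    print("Function name:", context.function_name)
    print("Request ID:", context.aws_request_id)
    print("Memory (MB):", context.memory_limit_in_mb)

    # Whatever the handler returns is serialized to JSON for the caller.
    return {"ok": True}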
A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 1215, "doc_id": "lambda", "chunk_id": "3", "question_id": 4, "question": "Where can you find information on how Lambda works?", "answer_span": "For information on how Lambda works, see How Lambda works.", "chunk": "tools. Related information • For information on how Lambda works, see How Lambda works. • To start using Lambda, see Create your first Lambda function. • For a list of example applications, see Getting started with example applications and patterns. How Lambda works Lambda functions are the basic building blocks you use to build Lambda applications. To write functions, it's essential to understand the core concepts and components that make up the Lambda Related information 3 AWS Lambda Developer Guide programming model. This section will guide you through the fundamental elements you need to know to start building serverless applications with Lambda. • Lambda functions and function handlers - A Lambda function is a small block of code that runs in response to events. functions are the basic building blocks you use to build applications. Function handlers are the entry point for event objects that your Lambda function code processes. • Lambda execution environment and runtimes - Lambda execution environments manage the resources required to run your function. Run times are the language-specific environments your functions run in. • Events and triggers - how other AWS services invoke your functions in response to specific events. • Lambda permissions and roles - how you control who can access your functions and what other AWS services your functions can interact with. Tip If you want to start by understanding serverless development more generally, see Understanding the difference between traditional and serverless development in the AWS Serverless Developer Guide. Lambda functions and function handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon"} +{"global_id": 1216, "doc_id": "lambda", "chunk_id": "4", "question_id": 1, "question": "What are the fundamental building blocks you use to create applications in Lambda?", "answer_span": "functions are the fundamental building blocks you use to create applications.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. 
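As a sketch of building a .zip deployment package, the snippet below zips a single-file function (assumed to exist as lambda_function.py) and registers it with boto3; the function name and role ARN are assumptions you would replace with your own values.

import zipfile
import boto3

# Bundle the function code (and any dependencies) into a .zip archive.
with zipfile.ZipFile("function.zip", "w") as archive:
    archive.write("lambda_function.py")

lambda_client = boto3.client("lambda")
with open("function.zip", "rb") as f:
    lambda_client.create_function(
        FunctionName="my-example-function",                        # assumed name
        Runtime="python3.12",
        Role="arn:aws:iam::123456789012:role/my-execution-role",   # assumed execution role
        Handler="lambda_function.lambda_handler",                  # file name . handler name
        Code={"ZipFile": f.read()},
    )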
Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 1217, "doc_id": "lambda", "chunk_id": "4", "question_id": 2, "question": "What is a Lambda function?", "answer_span": "A Lambda function is a piece of code that runs in response to events.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 1218, "doc_id": "lambda", "chunk_id": "4", "question_id": 3, "question": "What is a Lambda function handler?", "answer_span": "A Lambda function handler is the method in your function code that processes events.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. 
A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 1219, "doc_id": "lambda", "chunk_id": "4", "question_id": 4, "question": "What types of deployment package does Lambda support?", "answer_span": "Lambda supports two types of deployment package, .zip file archives and container images.", "chunk": "handlers In Lambda, functions are the fundamental building blocks you use to create applications. A Lambda function is a piece of code that runs in response to events, such as a user clicking a button on a website or a file being uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. You can think of a function as a kind of self-contained program with the following properties. A Lambda function handler is the method in your function code that processes events. When a function runs in response to an event, Lambda runs the function handler. Data about the event that caused the function to run is passed directly to the handler. While the code in a Lambda function can contain more than one method or function, Lambda functions can only have one handler. To create a Lambda function, you bundle your function code and its dependencies in a deployment package. Lambda supports two types of deployment package, .zip file archives and container images. Lambda functions and function handlers 4 AWS Lambda Developer Guide • A function has one specific job or purpose • They run only when needed in response to specific events • They automatically stop running when finished Lambda execution environment and runtimes Lambda functions run inside a secure, isolated execution environment which Lambda manages for you. This execution environment manages the processes and resources that are needed to run your function. 
When a function is first invoked, Lambda creates a new execution environment for the function to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and"} +{"global_id": 1220, "doc_id": "lambda", "chunk_id": "5", "question_id": 1, "question": "What happens to the execution environment after the function has finished running?", "answer_span": "if the function is invoked again, Lambda can re-use the existing execution environment.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 1221, "doc_id": "lambda", "chunk_id": "5", "question_id": 2, "question": "What does the Lambda execution environment contain?", "answer_span": "The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. 
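Because Lambda can re-use an execution environment as described above, work done outside the handler runs only on a cold start; a common sketch is to create SDK clients at module scope so warm invocations re-use them. The table name below is an assumption.

import json
import boto3

# Created once per execution environment and re-used on warm invocations.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("example-table")   # assumed table name

def lambda_handler(event, context):
    # Only the handler body runs on every invocation.
    table.put_item(Item={"pk": str(event.get("id", "unknown")), "payload": json.dumps(event)})
    return {"stored": True}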
For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 1222, "doc_id": "lambda", "chunk_id": "5", "question_id": 3, "question": "How can you invoke a Lambda function directly?", "answer_span": "You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs).", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. 
Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 1223, "doc_id": "lambda", "chunk_id": "5", "question_id": 4, "question": "What connects your function to an event source?", "answer_span": "A trigger connects your function to an event source, and your function can have multiple triggers.", "chunk": "to run in. After the function has finished running, Lambda doesn't stop the execution environment right away; if the function is invoked again, Lambda can re-use the existing execution environment. The Lambda execution environment also contains a runtime, a language-specific environment that relays event information and responses between Lambda and your function. Lambda provides a number of managed runtimes for the most popular programming languages, or you can create your own. For managed runtimes, Lambda automatically applies security updates and patches to functions using the runtime. Events and triggers You can also invoke a Lambda function directly by using the Lambda console, AWS CLI, or one of the AWS Software Development Kits (SDKs). It's more usual in a production application for your function to be invoked by another AWS service in response to a particular event. For example, you might want a function to run whenever an item is added to an Amazon DynamoDB table. To make your function respond to events, you set up a trigger. A trigger connects your function to an event source, and your function can have multiple triggers. When an event occurs, Lambda receives event data as a JSON document and converts it into an object that your code can process. You might define the following JSON format for your event and the Lambda runtime converts this JSON to an object before passing it to your function's handler. Example custom Lambda event { \"Location\": \"SEA\", \"WeatherData\":{ Lambda execution environment and runtimes 5 AWS Lambda Developer Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then"} +{"global_id": 1224, "doc_id": "lambda", "chunk_id": "6", "question_id": 1, "question": "What are the two main types of permissions that you need to configure for Lambda?", "answer_span": "Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. 
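A handler for the custom weather event shown above can read the converted JSON directly as a dictionary; this is only a sketch, and the range calculations are just one way to process those fields.

def lambda_handler(event, context):
    # Lambda has already converted the JSON document into a Python dictionary.
    location = event["Location"]
    temps = event["WeatherData"]["TemperaturesF"]
    pressures = event["WeatherData"]["PressuresHPa"]

    return {
        "location": location,
        "temperatureRangeF": temps["MaxTempF"] - temps["MinTempF"],
        "pressureRangeHPa": pressures["MaxPressureHPa"] - pressures["MinPressureHPa"],
    }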
To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} +{"global_id": 1225, "doc_id": "lambda", "chunk_id": "6", "question_id": 2, "question": "What is a Lambda execution role?", "answer_span": "A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy.", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. 
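As a sketch of setting up an execution role like the one described above, the snippet creates a role that the Lambda service can assume and attaches the AWS managed policy for basic CloudWatch Logs access; the role name is an assumption.

import json
import boto3

iam = boto3.client("iam")

# Trust policy that lets the Lambda service assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="my-lambda-execution-role",                  # assumed name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
print("Created role:", role["Role"]["Arn"])

# Grants permission to write log outputs to Amazon CloudWatch Logs.
iam.attach_role_policy(
    RoleName="my-lambda-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)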
Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} +{"global_id": 1226, "doc_id": "lambda", "chunk_id": "6", "question_id": 3, "question": "What must every Lambda function have?", "answer_span": "Every Lambda function must have an execution role", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} +{"global_id": 1227, "doc_id": "lambda", "chunk_id": "6", "question_id": 4, "question": "Can a single role be used by more than one function?", "answer_span": "a single role can be used by more than one function", "chunk": "Guide \"TemperaturesF\":{ \"MinTempF\": 22, \"MaxTempF\": 78 }, \"PressuresHPa\":{ \"MinPressureHPa\": 1015, \"MaxPressureHPa\": 1027 } } } Stream and queue services like Amazon Kinesis or Amazon SQS, Lambda use an event source mapping instead of a standard trigger. Event source mappings poll the source for new data, batch records together, and then invoke your function with the batched events. For more information, see How event source mappings differ from direct triggers. To understand how a trigger works, start by completing the Use an Amazon S3 trigger tutorial, or for a general overview of using triggers and instructions on creating a trigger using the Lambda console, see Integrating other services. 
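For poll-based sources such as Amazon SQS, the event source mapping behavior described above can be set up programmatically; the queue ARN, function name, and batch size here are assumptions for the sketch.

import boto3

lambda_client = boto3.client("lambda")

# Lambda polls the queue, batches messages, and invokes the function with the batch.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:example-queue",  # assumed queue
    FunctionName="my-example-function",                                 # assumed function
    BatchSize=10,
)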
Lambda permissions and roles For Lambda, there are two main types of permissions that you need to configure: • Permissions that your function needs to access other AWS services • Permissions that other users and AWS services need to access your function The following sections describe both of these permission types and discuss best practices for applying least-privilege permissions. Permissions for functions to access other AWS resources Lambda functions often need to access other AWS resources and perform actions on them. For example, a function might read items from a DynamoDB table, store an object in an S3 bucket, or write to an Amazon SQS queue. To give functions the permissions they need to perform these actions, you use an execution role. A Lambda execution role is a special kind of AWS Identity and Access Management (IAM) role, an identity you create in your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the"} +{"global_id": 1228, "doc_id": "lambda", "chunk_id": "7", "question_id": 1, "question": "What must every Lambda function have?", "answer_span": "Every Lambda function must have an execution role", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. 
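The resource-based policy statement described above can also be added outside the console; this sketch grants Amazon S3 permission to invoke the function for events from one bucket, with the function name, statement ID, and bucket ARN as assumed values.

import boto3

lambda_client = boto3.client("lambda")

lambda_client.add_permission(
    FunctionName="my-example-function",            # assumed function name
    StatementId="allow-s3-invoke",                 # assumed statement ID
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::my-example-bucket",    # assumed bucket ARN
)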
To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 1229, "doc_id": "lambda", "chunk_id": "7", "question_id": 2, "question": "What does the role's policy give your function?", "answer_span": "The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 1230, "doc_id": "lambda", "chunk_id": "7", "question_id": 3, "question": "How can you add extra permissions to your function's role?", "answer_span": "To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. 
For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 1231, "doc_id": "lambda", "chunk_id": "7", "question_id": 4, "question": "What must your function's resource-based policy grant for another AWS service to invoke your function?", "answer_span": "your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action", "chunk": "your account that has specific permissions associated with it defined in a policy. Lambda permissions and roles 6 AWS Lambda Developer Guide Every Lambda function must have an execution role, and a single role can be used by more than one function. When a function is invoked, Lambda assumes the function's execution role and is granted permission to take the actions defined in the role's policy. When you create a function in the Lambda console, Lambda automatically creates an execution role for your function. The role's policy gives your function basic permissions to write log outputs to Amazon CloudWatch Logs. To give your function permission to perform actions on other AWS resources, you need to edit the role to add the extra permissions. The easiest way to add permissions is to use an AWS managed policy. Managed policies are created and administered by AWS and provide permissions for many common use cases. For example, if your function performs CRUD operations on a DynamoDB table, you can add the AmazonDynamoDBFullAccess policy to your role. Permissions for other users and resources to access your function To grant other AWS service permission to access your Lambda function, you use a resourcebased policy. In IAM, resource-based policies are attached to a resource (in this case, your Lambda function) and define who can access the resource and what actions they are allowed to take. For another AWS service to invoke your function through a trigger, your function's resource-based policy must grant that service permission to use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or"} +{"global_id": 1232, "doc_id": "lambda", "chunk_id": "8", "question_id": 1, "question": "What action is used to invoke a Lambda function?", "answer_span": "use the lambda:InvokeFunction action.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. 
To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide Create your first Lambda function To get started with Lambda, use the Lambda console to create a function. In a few minutes, you can create and deploy a function and test it in the console. As you carry out the tutorial, you'll learn some fundamental Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. To keep things simple, you create your function using either the Python"} +{"global_id": 1233, "doc_id": "lambda", "chunk_id": "8", "question_id": 2, "question": "How does Lambda add permissions when creating a trigger using the console?", "answer_span": "Lambda automatically adds this permission for you.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide Create your first Lambda function To get started with Lambda, use the Lambda console to create a function. In a few minutes, you can create and deploy a function and test it in the console. 
As you carry out the tutorial, you'll learn some fundamental Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. To keep things simple, you create your function using either the Python"} +{"global_id": 1234, "doc_id": "lambda", "chunk_id": "8", "question_id": 3, "question": "What is the principle of least privilege?", "answer_span": "This is known as the principle of least privilege.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide Create your first Lambda function To get started with Lambda, use the Lambda console to create a function. In a few minutes, you can create and deploy a function and test it in the console. As you carry out the tutorial, you'll learn some fundamental Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. To keep things simple, you create your function using either the Python"} +{"global_id": 1235, "doc_id": "lambda", "chunk_id": "8", "question_id": 4, "question": "What can you use to create a function in Lambda?", "answer_span": "use the Lambda console to create a function.", "chunk": "use the lambda:InvokeFunction action. If you create the trigger using the console, Lambda automatically adds this permission for you. To grant permission to other AWS users to access your function, you can define this in your function's resource-based policy in exactly the same way as for another AWS service or resource. You can also use an identity-based policy that's associated with the user. Best practices for Lambda permissions When you set permissions using IAM policies, security best practice is to grant only the permissions required to perform a task. This is known as the principle of least privilege. To get started granting permissions for your function, you might choose to use an AWS managed policy. Managed policies can be the quickest and easiest way to grant permissions to perform a task, but they might also include other permissions you don't need. 
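As a sketch of the customer-managed, least-privilege alternative discussed here, the following Python (boto3) snippet creates a policy that allows only the DynamoDB item operations a function actually needs, scoped to a single table. The policy name, table ARN, Region, and account ID are assumptions for illustration only.

import json
import boto3

iam = boto3.client("iam")

# A scoped-down policy: only the item-level actions the function needs,
# restricted to one table. All names and ARNs are hypothetical.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem",
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/my-table",
        }
    ],
}

iam.create_policy(
    PolicyName="my-function-dynamodb-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)

The returned policy ARN could then be attached to the execution role with attach_role_policy, as in the earlier sketch.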
As you move from early development through test and production, we recommend you reduce permissions to only those needed by defining your own customer-managed policies. The same principle applies when granting permissions to access your function using a resourcebased policy. For example, if you want to give permission to Amazon S3 to invoke your function, Lambda permissions and roles 7 AWS Lambda Developer Guide Create your first Lambda function To get started with Lambda, use the Lambda console to create a function. In a few minutes, you can create and deploy a function and test it in the console. As you carry out the tutorial, you'll learn some fundamental Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. To keep things simple, you create your function using either the Python"} +{"global_id": 1236, "doc_id": "lambda", "chunk_id": "9", "question_id": 1, "question": "What can you learn about returning log outputs?", "answer_span": "You'll also learn how to return log outputs from your function", "chunk": "Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. To keep things simple, you create your function using either the Python or Node.js runtime. With these interpreted languages, you can edit function code directly in the console's built-in code editor. With compiled languages like Java and C#, you must create a deployment package on your local build machine and upload it to Lambda. To learn about deploying functions to Lambda using other runtimes, see the links in the the section called “Next steps” section. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Prerequisites Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. Prerequisites 42 AWS Lambda Developer Guide AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user,"} +{"global_id": 1237, "doc_id": "lambda", "chunk_id": "9", "question_id": 2, "question": "What must you do to create a deployment package with compiled languages?", "answer_span": "you must create a deployment package on your local build machine and upload it to Lambda", "chunk": "Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. 
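The tutorial concepts mentioned here, passing arguments through the event object and returning log output, can be sketched with a minimal Python handler; the event field used below is a hypothetical test-event key, not part of the tutorial itself.

import json

def lambda_handler(event, context):
    # Arguments passed to the function arrive in the event object; anything
    # printed here appears in the function's CloudWatch Logs log streams.
    print("Received event: " + json.dumps(event))
    name = event.get("name", "world")  # "name" is a hypothetical test-event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}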
To keep things simple, you create your function using either the Python or Node.js runtime. With these interpreted languages, you can edit function code directly in the console's built-in code editor. With compiled languages like Java and C#, you must create a deployment package on your local build machine and upload it to Lambda. To learn about deploying functions to Lambda using other runtimes, see the links in the the section called “Next steps” section. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Prerequisites Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. Prerequisites 42 AWS Lambda Developer Guide AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user,"} +{"global_id": 1238, "doc_id": "lambda", "chunk_id": "9", "question_id": 3, "question": "What is created when you sign up for an AWS account?", "answer_span": "an AWS account root user is created", "chunk": "Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. To keep things simple, you create your function using either the Python or Node.js runtime. With these interpreted languages, you can edit function code directly in the console's built-in code editor. With compiled languages like Java and C#, you must create a deployment package on your local build machine and upload it to Lambda. To learn about deploying functions to Lambda using other runtimes, see the links in the the section called “Next steps” section. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Prerequisites Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. Prerequisites 42 AWS Lambda Developer Guide AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. 
Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user,"} +{"global_id": 1239, "doc_id": "lambda", "chunk_id": "9", "question_id": 4, "question": "What is a security best practice regarding the root user?", "answer_span": "As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access", "chunk": "Lambda concepts, like how to pass arguments to your function using the Lambda event object. You'll also learn how to return log outputs from your function, and how to view your function's invocation logs in Amazon CloudWatch Logs. To keep things simple, you create your function using either the Python or Node.js runtime. With these interpreted languages, you can edit function code directly in the console's built-in code editor. With compiled languages like Java and C#, you must create a deployment package on your local build machine and upload it to Lambda. To learn about deploying functions to Lambda using other runtimes, see the links in the the section called “Next steps” section. Tip To learn how to build serverless solutions, check out the Serverless Developer Guide. Prerequisites Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account 1. Open https://portal.aws.amazon.com/billing/signup. 2. Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access. Prerequisites 42 AWS Lambda Developer Guide AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user,"} +{"global_id": 1240, "doc_id": "lambda", "chunk_id": "10", "question_id": 1, "question": "What can you do at any time regarding your account?", "answer_span": "you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.", "chunk": "confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. 
For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For"} +{"global_id": 1241, "doc_id": "lambda", "chunk_id": "10", "question_id": 2, "question": "What should you do after signing up for an AWS account?", "answer_span": "secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.", "chunk": "confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For"} +{"global_id": 1242, "doc_id": "lambda", "chunk_id": "10", "question_id": 3, "question": "How do you sign in as the account owner?", "answer_span": "Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address.", "chunk": "confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. 
Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For"} +{"global_id": 1243, "doc_id": "lambda", "chunk_id": "10", "question_id": 4, "question": "What is the first step to create a user with administrative access?", "answer_span": "Enable IAM Identity Center.", "chunk": "confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account. Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user 1. Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide. 2. Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide. Create a user with administrative access 1. Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide. 2. In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide. Sign in as the user with administrative access • To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. 
Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For"} +{"global_id": 1244, "doc_id": "lambda", "chunk_id": "11", "question_id": 1, "question": "What is the URL that was sent to your email address when you created the IAM Identity Center user?", "answer_span": "URL that was sent to your email address when you created the IAM Identity Center user.", "chunk": "URL that was sent to your email address when you created the IAM Identity Center user. Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For each of the example apps, we provide instructions to either create and configure resources manually using the AWS Management Console, or to use the AWS Serverless Application Model to deploy the resources using IaC. Follow the console intructions to learn more about configuring the individual AWS resources for each app, or use to AWS SAM to quickly deploy resources as you would in a production environment. File Processing • PDF Encryption Application: Create a serverless application that encrypts PDF files when they are uploaded to an Amazon Simple Storage Service bucket and saves them to another bucket, which is useful for securing sensitive documents upon upload. • Image Analysis Application: Create a serverless application that extracts text from images using Amazon Rekognition, which is useful for document processing, content moderation, and automated image analysis. Database Integration • Queue-to-Database Application: Create a serverless application that writes queue messages to an Amazon RDS database, which is useful for processing user registrations and handling order submissions. • Database Event Handler: Create a serverless application that responds to Amazon DynamoDB table changes, which is useful for audit logging, data replication, and automated workflows. Scheduled Tasks • Database Maintenance Application: Create a serverless application that automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in"} +{"global_id": 1245, "doc_id": "lambda", "chunk_id": "11", "question_id": 2, "question": "What is the purpose of the PDF Encryption Application?", "answer_span": "Create a serverless application that encrypts PDF files when they are uploaded to an Amazon Simple Storage Service bucket and saves them to another bucket, which is useful for securing sensitive documents upon upload.", "chunk": "URL that was sent to your email address when you created the IAM Identity Center user. Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For each of the example apps, we provide instructions to either create and configure resources manually using the AWS Management Console, or to use the AWS Serverless Application Model to deploy the resources using IaC. 
Follow the console intructions to learn more about configuring the individual AWS resources for each app, or use to AWS SAM to quickly deploy resources as you would in a production environment. File Processing • PDF Encryption Application: Create a serverless application that encrypts PDF files when they are uploaded to an Amazon Simple Storage Service bucket and saves them to another bucket, which is useful for securing sensitive documents upon upload. • Image Analysis Application: Create a serverless application that extracts text from images using Amazon Rekognition, which is useful for document processing, content moderation, and automated image analysis. Database Integration • Queue-to-Database Application: Create a serverless application that writes queue messages to an Amazon RDS database, which is useful for processing user registrations and handling order submissions. • Database Event Handler: Create a serverless application that responds to Amazon DynamoDB table changes, which is useful for audit logging, data replication, and automated workflows. Scheduled Tasks • Database Maintenance Application: Create a serverless application that automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in"} +{"global_id": 1246, "doc_id": "lambda", "chunk_id": "11", "question_id": 3, "question": "What does the Queue-to-Database Application do?", "answer_span": "Create a serverless application that writes queue messages to an Amazon RDS database, which is useful for processing user registrations and handling order submissions.", "chunk": "URL that was sent to your email address when you created the IAM Identity Center user. Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For each of the example apps, we provide instructions to either create and configure resources manually using the AWS Management Console, or to use the AWS Serverless Application Model to deploy the resources using IaC. Follow the console intructions to learn more about configuring the individual AWS resources for each app, or use to AWS SAM to quickly deploy resources as you would in a production environment. File Processing • PDF Encryption Application: Create a serverless application that encrypts PDF files when they are uploaded to an Amazon Simple Storage Service bucket and saves them to another bucket, which is useful for securing sensitive documents upon upload. • Image Analysis Application: Create a serverless application that extracts text from images using Amazon Rekognition, which is useful for document processing, content moderation, and automated image analysis. Database Integration • Queue-to-Database Application: Create a serverless application that writes queue messages to an Amazon RDS database, which is useful for processing user registrations and handling order submissions. • Database Event Handler: Create a serverless application that responds to Amazon DynamoDB table changes, which is useful for audit logging, data replication, and automated workflows. 
Scheduled Tasks • Database Maintenance Application: Create a serverless application that automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in"} +{"global_id": 1247, "doc_id": "lambda", "chunk_id": "11", "question_id": 4, "question": "What is the function of the Database Maintenance Application?", "answer_span": "Create a serverless application that automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management.", "chunk": "URL that was sent to your email address when you created the IAM Identity Center user. Prerequisites 43 AWS Lambda Developer Guide Getting started with example applications and patterns The following resources can be used to quickly create and deploy serverless apps that implement some common Lambda uses cases. For each of the example apps, we provide instructions to either create and configure resources manually using the AWS Management Console, or to use the AWS Serverless Application Model to deploy the resources using IaC. Follow the console intructions to learn more about configuring the individual AWS resources for each app, or use to AWS SAM to quickly deploy resources as you would in a production environment. File Processing • PDF Encryption Application: Create a serverless application that encrypts PDF files when they are uploaded to an Amazon Simple Storage Service bucket and saves them to another bucket, which is useful for securing sensitive documents upon upload. • Image Analysis Application: Create a serverless application that extracts text from images using Amazon Rekognition, which is useful for document processing, content moderation, and automated image analysis. Database Integration • Queue-to-Database Application: Create a serverless application that writes queue messages to an Amazon RDS database, which is useful for processing user registrations and handling order submissions. • Database Event Handler: Create a serverless application that responds to Amazon DynamoDB table changes, which is useful for audit logging, data replication, and automated workflows. Scheduled Tasks • Database Maintenance Application: Create a serverless application that automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in"} +{"global_id": 1248, "doc_id": "lambda", "chunk_id": "12", "question_id": 1, "question": "What does the system automatically delete from an Amazon DynamoDB table?", "answer_span": "automatically deletes entries more than 12 months old from an Amazon DynamoDB table", "chunk": "automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in EventBridge to trigger a Lambda function on a timed schedule. 
This format uses cron syntax and can be set with a one-minute granularity. Additional resources Use the following resources to further explore Lambda and serverless application development: • Serverless Land: a library of ready-to-use patterns for building serverless apps. It helps developers create applications faster using AWS services like Lambda, API Gateway, and EventBridge. The site offers pre-built solutions and best practices, making it easier to develop serverless systems. • Lambda sample applications: Applications that are available in the GitHub repository for this guide. These samples demonstrate the use of various languages and AWS services. Each sample application includes scripts for easy deployment and cleanup and supporting resources. • Code examples for Lambda using AWS SDKs: Examples that show you how to use Lambda with AWS software development kits (SDKs). These examples include basics, actions, scenarios, and AWS community contributions. Examples cover essential operations, individual service functions, and specific tasks using multiple functions or AWS services. Create a serverless file-processing app One of the most common use cases for Lambda is to perform file processing tasks. For example, you might use a Lambda function to automatically create PDF files from HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An"} +{"global_id": 1249, "doc_id": "lambda", "chunk_id": "12", "question_id": 2, "question": "What is the purpose of creating an EventBridge scheduled rule for Lambda functions?", "answer_span": "Use scheduled expressions for rules in EventBridge to trigger a Lambda function on a timed schedule", "chunk": "automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in EventBridge to trigger a Lambda function on a timed schedule. This format uses cron syntax and can be set with a one-minute granularity. Additional resources Use the following resources to further explore Lambda and serverless application development: • Serverless Land: a library of ready-to-use patterns for building serverless apps. It helps developers create applications faster using AWS services like Lambda, API Gateway, and EventBridge. The site offers pre-built solutions and best practices, making it easier to develop serverless systems. • Lambda sample applications: Applications that are available in the GitHub repository for this guide. These samples demonstrate the use of various languages and AWS services. Each sample application includes scripts for easy deployment and cleanup and supporting resources. • Code examples for Lambda using AWS SDKs: Examples that show you how to use Lambda with AWS software development kits (SDKs). These examples include basics, actions, scenarios, and AWS community contributions. Examples cover essential operations, individual service functions, and specific tasks using multiple functions or AWS services. Create a serverless file-processing app One of the most common use cases for Lambda is to perform file processing tasks. 
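Before turning to the file-processing example, here is a minimal Python (boto3) sketch of the EventBridge scheduled-rule pattern described above, using a six-field cron expression. The rule name, schedule, and function ARN are placeholders, and the target function would additionally need a resource-based permission allowing events.amazonaws.com to invoke it.

import boto3

events = boto3.client("events")

# Run the target once a day at 03:00 UTC; EventBridge cron expressions have
# six fields: minute hour day-of-month month day-of-week year.
events.put_rule(
    Name="daily-db-maintenance",
    ScheduleExpression="cron(0 3 * * ? *)",
)

events.put_targets(
    Rule="daily-db-maintenance",
    Targets=[
        {
            "Id": "1",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-maintenance-function",
        }
    ],
)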
For example, you might use a Lambda function to automatically create PDF files from HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An"} +{"global_id": 1250, "doc_id": "lambda", "chunk_id": "12", "question_id": 3, "question": "What is one of the most common use cases for Lambda?", "answer_span": "One of the most common use cases for Lambda is to perform file processing tasks", "chunk": "automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in EventBridge to trigger a Lambda function on a timed schedule. This format uses cron syntax and can be set with a one-minute granularity. Additional resources Use the following resources to further explore Lambda and serverless application development: • Serverless Land: a library of ready-to-use patterns for building serverless apps. It helps developers create applications faster using AWS services like Lambda, API Gateway, and EventBridge. The site offers pre-built solutions and best practices, making it easier to develop serverless systems. • Lambda sample applications: Applications that are available in the GitHub repository for this guide. These samples demonstrate the use of various languages and AWS services. Each sample application includes scripts for easy deployment and cleanup and supporting resources. • Code examples for Lambda using AWS SDKs: Examples that show you how to use Lambda with AWS software development kits (SDKs). These examples include basics, actions, scenarios, and AWS community contributions. Examples cover essential operations, individual service functions, and specific tasks using multiple functions or AWS services. Create a serverless file-processing app One of the most common use cases for Lambda is to perform file processing tasks. For example, you might use a Lambda function to automatically create PDF files from HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An"} +{"global_id": 1251, "doc_id": "lambda", "chunk_id": "12", "question_id": 4, "question": "What happens when a user uploads an image in the file-processing app example?", "answer_span": "to create thumbnails when a user uploads an image", "chunk": "automatically deletes entries more than 12 months old from an Amazon DynamoDB table using a cron schedule, which is useful for automated database maintenance and data lifecycle management. File Processing 56 AWS Lambda Developer Guide • Create an EventBridge scheduled rule for Lambda functions: Use scheduled expressions for rules in EventBridge to trigger a Lambda function on a timed schedule. This format uses cron syntax and can be set with a one-minute granularity. Additional resources Use the following resources to further explore Lambda and serverless application development: • Serverless Land: a library of ready-to-use patterns for building serverless apps. 
It helps developers create applications faster using AWS services like Lambda, API Gateway, and EventBridge. The site offers pre-built solutions and best practices, making it easier to develop serverless systems. • Lambda sample applications: Applications that are available in the GitHub repository for this guide. These samples demonstrate the use of various languages and AWS services. Each sample application includes scripts for easy deployment and cleanup and supporting resources. • Code examples for Lambda using AWS SDKs: Examples that show you how to use Lambda with AWS software development kits (SDKs). These examples include basics, actions, scenarios, and AWS community contributions. Examples cover essential operations, individual service functions, and specific tasks using multiple functions or AWS services. Create a serverless file-processing app One of the most common use cases for Lambda is to perform file processing tasks. For example, you might use a Lambda function to automatically create PDF files from HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An"} +{"global_id": 1252, "doc_id": "lambda", "chunk_id": "13", "question_id": 1, "question": "What type of files does the app automatically encrypt?", "answer_span": "you create an app which automatically encrypts PDF files when they are uploaded", "chunk": "HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An S3 bucket for users to upload PDF files to • A Lambda function in Python which reads the uploaded file and creates an encrypted, passwordprotected version of it • A second S3 bucket for Lambda to save the encrypted file in Additional resources 57 AWS Lambda Developer Guide You also create an AWS Identity and Access Management (IAM) policy to give your Lambda function permission to perform read and write operations on your S3 buckets. Tip If you’re brand new to Lambda, we recommend that you start with the tutorial Create your first function before creating this example app. You can deploy your app manually by creating and configuring resources with the AWS Management Console or the AWS Command Line Interface (AWS CLI). You can also deploy the app by using the AWS Serverless Application Model (AWS SAM). AWS SAM is an infrastructure as code (IaC) tool. With IaC, you don’t create resources manually, but define them in code and then deploy them automatically. If you want to learn more about using Lambda with IaC before deploying this example app, see the section called “Infrastructure as code (IaC)”. File-processing app 58 AWS Lambda Developer Guide Understanding Lambda function invocation methods After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. 
• The Invoke API"} +{"global_id": 1253, "doc_id": "lambda", "chunk_id": "13", "question_id": 2, "question": "What is one of the resources you create for the app?", "answer_span": "An S3 bucket for users to upload PDF files to", "chunk": "HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An S3 bucket for users to upload PDF files to • A Lambda function in Python which reads the uploaded file and creates an encrypted, passwordprotected version of it • A second S3 bucket for Lambda to save the encrypted file in Additional resources 57 AWS Lambda Developer Guide You also create an AWS Identity and Access Management (IAM) policy to give your Lambda function permission to perform read and write operations on your S3 buckets. Tip If you’re brand new to Lambda, we recommend that you start with the tutorial Create your first function before creating this example app. You can deploy your app manually by creating and configuring resources with the AWS Management Console or the AWS Command Line Interface (AWS CLI). You can also deploy the app by using the AWS Serverless Application Model (AWS SAM). AWS SAM is an infrastructure as code (IaC) tool. With IaC, you don’t create resources manually, but define them in code and then deploy them automatically. If you want to learn more about using Lambda with IaC before deploying this example app, see the section called “Infrastructure as code (IaC)”. File-processing app 58 AWS Lambda Developer Guide Understanding Lambda function invocation methods After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. • The Invoke API"} +{"global_id": 1254, "doc_id": "lambda", "chunk_id": "13", "question_id": 3, "question": "What programming language is used for the Lambda function?", "answer_span": "A Lambda function in Python", "chunk": "HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An S3 bucket for users to upload PDF files to • A Lambda function in Python which reads the uploaded file and creates an encrypted, passwordprotected version of it • A second S3 bucket for Lambda to save the encrypted file in Additional resources 57 AWS Lambda Developer Guide You also create an AWS Identity and Access Management (IAM) policy to give your Lambda function permission to perform read and write operations on your S3 buckets. Tip If you’re brand new to Lambda, we recommend that you start with the tutorial Create your first function before creating this example app. You can deploy your app manually by creating and configuring resources with the AWS Management Console or the AWS Command Line Interface (AWS CLI). You can also deploy the app by using the AWS Serverless Application Model (AWS SAM). AWS SAM is an infrastructure as code (IaC) tool. With IaC, you don’t create resources manually, but define them in code and then deploy them automatically. 
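As a rough sketch of the file-processing function described in this example (not the guide's actual sample code), a Python handler for the S3 trigger might look like the following; the destination bucket name and the encryption step are placeholders.

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Each record in an S3 event notification describes one uploaded object.
    for record in event["Records"]:
        source_bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        local_path = "/tmp/" + key.split("/")[-1]  # /tmp is the function's writable scratch space
        s3.download_file(source_bucket, key, local_path)
        # ... encrypt the PDF here, for example with a PDF library packaged as a layer ...
        s3.upload_file(local_path, "amzn-s3-demo-encrypted-bucket", "encrypted-" + key)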
If you want to learn more about using Lambda with IaC before deploying this example app, see the section called “Infrastructure as code (IaC)”. File-processing app 58 AWS Lambda Developer Guide Understanding Lambda function invocation methods After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. • The Invoke API"} +{"global_id": 1255, "doc_id": "lambda", "chunk_id": "13", "question_id": 4, "question": "What tool is mentioned for deploying the app using infrastructure as code?", "answer_span": "AWS Serverless Application Model (AWS SAM)", "chunk": "HTML files or images, or to create thumbnails when a user uploads an image. In this example, you create an app which automatically encrypts PDF files when they are uploaded to an Amazon Simple Storage Service (Amazon S3) bucket. To implement this app, you create the following resources: • An S3 bucket for users to upload PDF files to • A Lambda function in Python which reads the uploaded file and creates an encrypted, passwordprotected version of it • A second S3 bucket for Lambda to save the encrypted file in Additional resources 57 AWS Lambda Developer Guide You also create an AWS Identity and Access Management (IAM) policy to give your Lambda function permission to perform read and write operations on your S3 buckets. Tip If you’re brand new to Lambda, we recommend that you start with the tutorial Create your first function before creating this example app. You can deploy your app manually by creating and configuring resources with the AWS Management Console or the AWS Command Line Interface (AWS CLI). You can also deploy the app by using the AWS Serverless Application Model (AWS SAM). AWS SAM is an infrastructure as code (IaC) tool. With IaC, you don’t create resources manually, but define them in code and then deploy them automatically. If you want to learn more about using Lambda with IaC before deploying this example app, see the section called “Infrastructure as code (IaC)”. File-processing app 58 AWS Lambda Developer Guide Understanding Lambda function invocation methods After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. • The Invoke API"} +{"global_id": 1256, "doc_id": "lambda", "chunk_id": "14", "question_id": 1, "question": "What is one way to invoke your Lambda function?", "answer_span": "Use the Lambda console to quickly create a test event to invoke your function.", "chunk": "After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. • The Invoke API – Use the Lambda Invoke API to directly invoke your function. • The AWS Command Line Interface (AWS CLI) – Use the aws lambda invoke AWS CLI command to directly invoke your function from the command line. • A function URL HTTP(S) endpoint – Use function URLs to create a dedicated HTTP(S) endpoint that you can use to invoke your function. All of these methods are direct ways to invoke your function. 
In Lambda, a common use case is to invoke your function based on an event that occurs elsewhere in your application. Some services can invoke a Lambda function with each new event. This is called a trigger. For stream and queuebased services, Lambda invokes the function with batches of records. This is called an event source mapping. When you invoke a function, you can choose to invoke it synchronously or asynchronously. With synchronous invocation, you wait for the function to process the event and return a response. With asynchronous invocation, Lambda queues the event for processing and returns a response immediately. The InvocationType request parameter in the Invoke API determines how Lambda invokes your function. A value of RequestResponse indicates synchronous invocation, and a value of Event indicates asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS"} +{"global_id": 1257, "doc_id": "lambda", "chunk_id": "14", "question_id": 2, "question": "What does a value of RequestResponse indicate in the InvocationType request parameter?", "answer_span": "A value of RequestResponse indicates synchronous invocation.", "chunk": "After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. • The Invoke API – Use the Lambda Invoke API to directly invoke your function. • The AWS Command Line Interface (AWS CLI) – Use the aws lambda invoke AWS CLI command to directly invoke your function from the command line. • A function URL HTTP(S) endpoint – Use function URLs to create a dedicated HTTP(S) endpoint that you can use to invoke your function. All of these methods are direct ways to invoke your function. In Lambda, a common use case is to invoke your function based on an event that occurs elsewhere in your application. Some services can invoke a Lambda function with each new event. This is called a trigger. For stream and queuebased services, Lambda invokes the function with batches of records. This is called an event source mapping. When you invoke a function, you can choose to invoke it synchronously or asynchronously. With synchronous invocation, you wait for the function to process the event and return a response. With asynchronous invocation, Lambda queues the event for processing and returns a response immediately. The InvocationType request parameter in the Invoke API determines how Lambda invokes your function. A value of RequestResponse indicates synchronous invocation, and a value of Event indicates asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. 
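A short Python (boto3) sketch of the two invocation types described here, using a placeholder function name and payload:

import json
import boto3

lambda_client = boto3.client("lambda")

# Synchronous invocation: the call blocks until the function returns a response.
sync_response = lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",
    Payload=json.dumps({"key": "value"}),
)
print(sync_response["Payload"].read())

# Asynchronous invocation: Lambda queues the event and responds immediately.
lambda_client.invoke(
    FunctionName="my-function",
    InvocationType="Event",
    Payload=json.dumps({"key": "value"}),
)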
Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS"} +{"global_id": 1258, "doc_id": "lambda", "chunk_id": "14", "question_id": 3, "question": "What is called a trigger in Lambda?", "answer_span": "This is called a trigger.", "chunk": "After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. • The Invoke API – Use the Lambda Invoke API to directly invoke your function. • The AWS Command Line Interface (AWS CLI) – Use the aws lambda invoke AWS CLI command to directly invoke your function from the command line. • A function URL HTTP(S) endpoint – Use function URLs to create a dedicated HTTP(S) endpoint that you can use to invoke your function. All of these methods are direct ways to invoke your function. In Lambda, a common use case is to invoke your function based on an event that occurs elsewhere in your application. Some services can invoke a Lambda function with each new event. This is called a trigger. For stream and queuebased services, Lambda invokes the function with batches of records. This is called an event source mapping. When you invoke a function, you can choose to invoke it synchronously or asynchronously. With synchronous invocation, you wait for the function to process the event and return a response. With asynchronous invocation, Lambda queues the event for processing and returns a response immediately. The InvocationType request parameter in the Invoke API determines how Lambda invokes your function. A value of RequestResponse indicates synchronous invocation, and a value of Event indicates asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS"} +{"global_id": 1259, "doc_id": "lambda", "chunk_id": "14", "question_id": 4, "question": "What syntax do Lambda dual-stack endpoints use?", "answer_span": "Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws", "chunk": "After you deploy your Lambda function, you can invoke it in several ways: • The Lambda console – Use the Lambda console to quickly create a test event to invoke your function. • The AWS SDK – Use the AWS SDK to programmatically invoke your function. • The Invoke API – Use the Lambda Invoke API to directly invoke your function. • The AWS Command Line Interface (AWS CLI) – Use the aws lambda invoke AWS CLI command to directly invoke your function from the command line. • A function URL HTTP(S) endpoint – Use function URLs to create a dedicated HTTP(S) endpoint that you can use to invoke your function. All of these methods are direct ways to invoke your function. In Lambda, a common use case is to invoke your function based on an event that occurs elsewhere in your application. Some services can invoke a Lambda function with each new event. This is called a trigger. For stream and queuebased services, Lambda invokes the function with batches of records. This is called an event source mapping. When you invoke a function, you can choose to invoke it synchronously or asynchronously. 
With synchronous invocation, you wait for the function to process the event and return a response. With asynchronous invocation, Lambda queues the event for processing and returns a response immediately. The InvocationType request parameter in the Invoke API determines how Lambda invokes your function. A value of RequestResponse indicates synchronous invocation, and a value of Event indicates asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS"} +{"global_id": 1260, "doc_id": "lambda", "chunk_id": "15", "question_id": 1, "question": "How do you invoke your function over IPv6?", "answer_span": "To invoke your function over IPv6, use Lambda's public dual-stack endpoints.", "chunk": "asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS Lambda Developer Guide If the function invocation results in an error, for synchronous invocations, view the error message in the response and retry the invocation manually. For asynchronous invocations, Lambda handles retries automatically and can send invocation records to a destination. 324 AWS Lambda Developer Guide Invoke a Lambda function synchronously When you invoke a function synchronously, Lambda runs the function and waits for a response. When the function completes, Lambda returns the response from the function's code with additional data, such as the version of the function that was invoked. To invoke a function synchronously with the AWS CLI, use the invoke command. aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload '{ \"key\": \"value\" }' response.json The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. You should see the following output: { \"ExecutedVersion\": \"$LATEST\", \"StatusCode\": 200 } The following diagram shows clients invoking a Lambda function synchronously. Lambda sends the events directly to the function and sends the function's response back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error,"} +{"global_id": 1261, "doc_id": "lambda", "chunk_id": "15", "question_id": 2, "question": "What do dual-stack endpoints support?", "answer_span": "Dual-stack endpoints support both IPv4 and IPv6.", "chunk": "asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. 
Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS Lambda Developer Guide If the function invocation results in an error, for synchronous invocations, view the error message in the response and retry the invocation manually. For asynchronous invocations, Lambda handles retries automatically and can send invocation records to a destination. 324 AWS Lambda Developer Guide Invoke a Lambda function synchronously When you invoke a function synchronously, Lambda runs the function and waits for a response. When the function completes, Lambda returns the response from the function's code with additional data, such as the version of the function that was invoked. To invoke a function synchronously with the AWS CLI, use the invoke command. aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload '{ \"key\": \"value\" }' response.json The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. You should see the following output: { \"ExecutedVersion\": \"$LATEST\", \"StatusCode\": 200 } The following diagram shows clients invoking a Lambda function synchronously. Lambda sends the events directly to the function and sends the function's response back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error,"} +{"global_id": 1262, "doc_id": "lambda", "chunk_id": "15", "question_id": 3, "question": "What command is used to invoke a function synchronously with the AWS CLI?", "answer_span": "To invoke a function synchronously with the AWS CLI, use the invoke command.", "chunk": "asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS Lambda Developer Guide If the function invocation results in an error, for synchronous invocations, view the error message in the response and retry the invocation manually. For asynchronous invocations, Lambda handles retries automatically and can send invocation records to a destination. 324 AWS Lambda Developer Guide Invoke a Lambda function synchronously When you invoke a function synchronously, Lambda runs the function and waits for a response. When the function completes, Lambda returns the response from the function's code with additional data, such as the version of the function that was invoked. To invoke a function synchronously with the AWS CLI, use the invoke command. aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload '{ \"key\": \"value\" }' response.json The cli-binary-format option is required if you're using AWS CLI version 2. 
To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. You should see the following output: { \"ExecutedVersion\": \"$LATEST\", \"StatusCode\": 200 } The following diagram shows clients invoking a Lambda function synchronously. Lambda sends the events directly to the function and sends the function's response back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error,"} +{"global_id": 1263, "doc_id": "lambda", "chunk_id": "15", "question_id": 4, "question": "What is the required option if you're using AWS CLI version 2?", "answer_span": "The cli-binary-format option is required if you're using AWS CLI version 2.", "chunk": "asynchronous invocation. To invoke your function over IPv6, use Lambda's public dual-stack endpoints. Dual-stack endpoints support both IPv4 and IPv6. Lambda dual-stack endpoints use the following syntax: protocol://lambda.us-east-1.api.aws You can also use Lambda function URLs to invoke functions over IPv6. Function URL endpoints have the following format: https://url-id.lambda-url.us-east-1.on.aws 323 AWS Lambda Developer Guide If the function invocation results in an error, for synchronous invocations, view the error message in the response and retry the invocation manually. For asynchronous invocations, Lambda handles retries automatically and can send invocation records to a destination. 324 AWS Lambda Developer Guide Invoke a Lambda function synchronously When you invoke a function synchronously, Lambda runs the function and waits for a response. When the function completes, Lambda returns the response from the function's code with additional data, such as the version of the function that was invoked. To invoke a function synchronously with the AWS CLI, use the invoke command. aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload '{ \"key\": \"value\" }' response.json The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. You should see the following output: { \"ExecutedVersion\": \"$LATEST\", \"StatusCode\": 200 } The following diagram shows clients invoking a Lambda function synchronously. Lambda sends the events directly to the function and sends the function's response back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error,"} +{"global_id": 1264, "doc_id": "lambda", "chunk_id": "16", "question_id": 1, "question": "What is the payload in the context of the invoker?", "answer_span": "The payload is a string that contains an event in JSON format.", "chunk": "back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. 
If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error, the response body is the object or error in JSON format. If the function exits without error, the response body is null. Note Lambda does not wait for external extensions to complete before sending the response. External extensions run as independent processes in the execution environment and continue to run after the function invocation is complete. For more information, see Augment Lambda functions using Lambda extensions. The output from the command, which is displayed in the terminal, includes information from headers in the response from Lambda. This includes the version that processed the event (useful when you use aliases), and the status code returned by Lambda. If Lambda was able to run the function, the status code is 200, even if the function returned an error. Note For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings. If Lambda isn't able to run the function, the error is displayed in the output. aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload value response.json You should see the following output: An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in"} +{"global_id": 1265, "doc_id": "lambda", "chunk_id": "16", "question_id": 2, "question": "What is the name of the file where the AWS CLI writes the response from the function?", "answer_span": "the name of the file where the AWS CLI writes the response from the function is response.json.", "chunk": "back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error, the response body is the object or error in JSON format. If the function exits without error, the response body is null. Note Lambda does not wait for external extensions to complete before sending the response. External extensions run as independent processes in the execution environment and continue to run after the function invocation is complete. For more information, see Augment Lambda functions using Lambda extensions. The output from the command, which is displayed in the terminal, includes information from headers in the response from Lambda. This includes the version that processed the event (useful when you use aliases), and the status code returned by Lambda. If Lambda was able to run the function, the status code is 200, even if the function returned an error. Note For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings. If Lambda isn't able to run the function, the error is displayed in the output. 
aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload value response.json You should see the following output: An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in"} +{"global_id": 1266, "doc_id": "lambda", "chunk_id": "16", "question_id": 3, "question": "What happens if the function exits without error?", "answer_span": "If the function exits without error, the response body is null.", "chunk": "back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error, the response body is the object or error in JSON format. If the function exits without error, the response body is null. Note Lambda does not wait for external extensions to complete before sending the response. External extensions run as independent processes in the execution environment and continue to run after the function invocation is complete. For more information, see Augment Lambda functions using Lambda extensions. The output from the command, which is displayed in the terminal, includes information from headers in the response from Lambda. This includes the version that processed the event (useful when you use aliases), and the status code returned by Lambda. If Lambda was able to run the function, the status code is 200, even if the function returned an error. Note For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings. If Lambda isn't able to run the function, the error is displayed in the output. aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload value response.json You should see the following output: An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in"} +{"global_id": 1267, "doc_id": "lambda", "chunk_id": "16", "question_id": 4, "question": "What status code is returned by Lambda if it was able to run the function?", "answer_span": "the status code is 200, even if the function returned an error.", "chunk": "back to the invoker. The payload is a string that contains an event in JSON format. The name of the file where the AWS CLI writes the response from the function is response.json. If the function returns an Invoke a function synchronously 325 AWS Lambda Developer Guide object or error, the response body is the object or error in JSON format. If the function exits without error, the response body is null. Note Lambda does not wait for external extensions to complete before sending the response. 
External extensions run as independent processes in the execution environment and continue to run after the function invocation is complete. For more information, see Augment Lambda functions using Lambda extensions. The output from the command, which is displayed in the terminal, includes information from headers in the response from Lambda. This includes the version that processed the event (useful when you use aliases), and the status code returned by Lambda. If Lambda was able to run the function, the status code is 200, even if the function returned an error. Note For functions with a long timeout, your client might be disconnected during synchronous invocation while it waits for a response. Configure your HTTP client, SDK, firewall, proxy, or operating system to allow for long connections with timeout or keep-alive settings. If Lambda isn't able to run the function, the error is displayed in the output. aws lambda invoke --function-name my-function \\ --cli-binary-format raw-in-base64-out \\ --payload value response.json You should see the following output: An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in"} +{"global_id": 1268, "doc_id": "lambda", "chunk_id": "17", "question_id": 1, "question": "What error occurred when calling the Invoke operation?", "answer_span": "An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])'value'; line: 1, column: 11]", "chunk": "An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in your command line shell. To complete the steps in this section, you must have the AWS CLI version 2. Invoke a function synchronously 326 AWS Lambda Developer Guide You can use the AWS CLI to retrieve logs for an invocation using the --log-type command option. The response contains a LogResult field that contains up to 4 KB of base64-encoded logs from the invocation. Example retrieve a log ID The following example shows how to retrieve a log ID from the LogResult field for a function named my-function. aws lambda invoke --function-name my-function out --log-type Tail You should see the following output: { \"StatusCode\": 200, \"LogResult\": \"U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOC1mMTU0LTExZTgtOGNkYS0yOTc0YzVlNGZiMjEgVmVyc2lvb...\", \"ExecutedVersion\": \"$LATEST\" } Example decode the logs In the same command prompt, use the base64 utility to decode the logs. The following example shows how to retrieve base64-encoded logs for my-function. aws lambda invoke --function-name my-function out --log-type Tail \\ --query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 -decode The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. 
For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility"} +{"global_id": 1269, "doc_id": "lambda", "chunk_id": "17", "question_id": 2, "question": "What is the AWS CLI?", "answer_span": "The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in your command line shell.", "chunk": "An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in your command line shell. To complete the steps in this section, you must have the AWS CLI version 2. Invoke a function synchronously 326 AWS Lambda Developer Guide You can use the AWS CLI to retrieve logs for an invocation using the --log-type command option. The response contains a LogResult field that contains up to 4 KB of base64-encoded logs from the invocation. Example retrieve a log ID The following example shows how to retrieve a log ID from the LogResult field for a function named my-function. aws lambda invoke --function-name my-function out --log-type Tail You should see the following output: { \"StatusCode\": 200, \"LogResult\": \"U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOC1mMTU0LTExZTgtOGNkYS0yOTc0YzVlNGZiMjEgVmVyc2lvb...\", \"ExecutedVersion\": \"$LATEST\" } Example decode the logs In the same command prompt, use the base64 utility to decode the logs. The following example shows how to retrieve base64-encoded logs for my-function. aws lambda invoke --function-name my-function out --log-type Tail \\ --query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 -decode The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. 
You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility"} +{"global_id": 1270, "doc_id": "lambda", "chunk_id": "17", "question_id": 3, "question": "What command option can be used to retrieve logs for an invocation?", "answer_span": "You can use the AWS CLI to retrieve logs for an invocation using the --log-type command option.", "chunk": "An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in your command line shell. To complete the steps in this section, you must have the AWS CLI version 2. Invoke a function synchronously 326 AWS Lambda Developer Guide You can use the AWS CLI to retrieve logs for an invocation using the --log-type command option. The response contains a LogResult field that contains up to 4 KB of base64-encoded logs from the invocation. Example retrieve a log ID The following example shows how to retrieve a log ID from the LogResult field for a function named my-function. aws lambda invoke --function-name my-function out --log-type Tail You should see the following output: { \"StatusCode\": 200, \"LogResult\": \"U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOC1mMTU0LTExZTgtOGNkYS0yOTc0YzVlNGZiMjEgVmVyc2lvb...\", \"ExecutedVersion\": \"$LATEST\" } Example decode the logs In the same command prompt, use the base64 utility to decode the logs. The following example shows how to retrieve base64-encoded logs for my-function. aws lambda invoke --function-name my-function out --log-type Tail \\ --query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 -decode The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. 
You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility"} +{"global_id": 1271, "doc_id": "lambda", "chunk_id": "17", "question_id": 4, "question": "What is required if you're using AWS CLI version 2?", "answer_span": "The cli-binary-format option is required if you're using AWS CLI version 2.", "chunk": "An error occurred (InvalidRequestContentException) when calling the Invoke operation: Could not parse request body into json: Unrecognized token 'value': was expecting ('true', 'false' or 'null') at [Source: (byte[])\"value\"; line: 1, column: 11] The AWS CLI is an open-source tool that enables you to interact with AWS services using commands in your command line shell. To complete the steps in this section, you must have the AWS CLI version 2. Invoke a function synchronously 326 AWS Lambda Developer Guide You can use the AWS CLI to retrieve logs for an invocation using the --log-type command option. The response contains a LogResult field that contains up to 4 KB of base64-encoded logs from the invocation. Example retrieve a log ID The following example shows how to retrieve a log ID from the LogResult field for a function named my-function. aws lambda invoke --function-name my-function out --log-type Tail You should see the following output: { \"StatusCode\": 200, \"LogResult\": \"U1RBUlQgUmVxdWVzdElkOiA4N2QwNDRiOC1mMTU0LTExZTgtOGNkYS0yOTc0YzVlNGZiMjEgVmVyc2lvb...\", \"ExecutedVersion\": \"$LATEST\" } Example decode the logs In the same command prompt, use the base64 utility to decode the logs. The following example shows how to retrieve base64-encoded logs for my-function. aws lambda invoke --function-name my-function out --log-type Tail \\ --query 'LogResult' --output text --cli-binary-format raw-in-base64-out | base64 -decode The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility"} +{"global_id": 1272, "doc_id": "lambda", "chunk_id": "18", "question_id": 1, "question": "What should you see as the output when invoking a function?", "answer_span": "You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST", "chunk": "2. 
You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility is available on Linux, macOS, and Ubuntu on Windows. macOS users may need to use base64 -D. For more information about the Invoke API, including a full list of parameters, headers, and errors, see Invoke. When you invoke a function directly, you can check the response for errors and retry. The AWS CLI and AWS SDK also automatically retry on client timeouts, throttling, and service errors. For more information, see Understanding retry behavior in Lambda. Invoke a function synchronously 328 AWS Lambda Developer Guide Invoking a Lambda function asynchronously Several AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Simple Notification Service (Amazon SNS), invoke functions asynchronously to process events. You can also invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs. When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors, and can send invocation records to a downstream resource such as Amazon Simple Queue Service (Amazon SQS) or Amazon EventBridge (EventBridge) to chain together components of your application. The following diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function."} +{"global_id": 1273, "doc_id": "lambda", "chunk_id": "18", "question_id": 2, "question": "What is the duration of the AWS Lambda function invocation?", "answer_span": "Duration: 79.67 ms Duration: 80 ms", "chunk": "2. You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility is available on Linux, macOS, and Ubuntu on Windows. macOS users may need to use base64 -D. For more information about the Invoke API, including a full list of parameters, headers, and errors, see Invoke. When you invoke a function directly, you can check the response for errors and retry. The AWS CLI and AWS SDK also automatically retry on client timeouts, throttling, and service errors. For more information, see Understanding retry behavior in Lambda. 
Invoke a function synchronously 328 AWS Lambda Developer Guide Invoking a Lambda function asynchronously Several AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Simple Notification Service (Amazon SNS), invoke functions asynchronously to process events. You can also invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs. When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors, and can send invocation records to a downstream resource such as Amazon Simple Queue Service (Amazon SQS) or Amazon EventBridge (EventBridge) to chain together components of your application. The following diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function."} +{"global_id": 1274, "doc_id": "lambda", "chunk_id": "18", "question_id": 3, "question": "Which operating systems support the base64 utility?", "answer_span": "The base64 utility is available on Linux, macOS, and Ubuntu on Windows.", "chunk": "2. You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility is available on Linux, macOS, and Ubuntu on Windows. macOS users may need to use base64 -D. For more information about the Invoke API, including a full list of parameters, headers, and errors, see Invoke. When you invoke a function directly, you can check the response for errors and retry. The AWS CLI and AWS SDK also automatically retry on client timeouts, throttling, and service errors. For more information, see Understanding retry behavior in Lambda. Invoke a function synchronously 328 AWS Lambda Developer Guide Invoking a Lambda function asynchronously Several AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Simple Notification Service (Amazon SNS), invoke functions asynchronously to process events. You can also invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs. When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors, and can send invocation records to a downstream resource such as Amazon Simple Queue Service (Amazon SQS) or Amazon EventBridge (EventBridge) to chain together components of your application. The following diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. 
A separate process reads events from the queue and sends them to your function."} +{"global_id": 1275, "doc_id": "lambda", "chunk_id": "18", "question_id": 4, "question": "How does Lambda handle asynchronous invocation?", "answer_span": "When you invoke a function asynchronously, you don't wait for a response from the function code.", "chunk": "2. You should see the following output: START RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Version: $LATEST \"AWS_SESSION_TOKEN\": \"AgoJb3JpZ2luX2VjELj...\", \"_X_AMZN_TRACE_ID\": \"Root=1-5d02e5caf5792818b6fe8368e5b51d50;Parent=191db58857df8395;Sampled=0\"\",ask/lib:/opt/lib\", END RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Invoke a function synchronously 327 AWS Lambda REPORT RequestId: 57f231fb-1730-4395-85cb-4f71bd2b87b8 Duration: 79.67 ms Duration: 80 ms Memory Size: 128 MB Max Memory Used: 73 MB Developer Guide Billed The base64 utility is available on Linux, macOS, and Ubuntu on Windows. macOS users may need to use base64 -D. For more information about the Invoke API, including a full list of parameters, headers, and errors, see Invoke. When you invoke a function directly, you can check the response for errors and retry. The AWS CLI and AWS SDK also automatically retry on client timeouts, throttling, and service errors. For more information, see Understanding retry behavior in Lambda. Invoke a function synchronously 328 AWS Lambda Developer Guide Invoking a Lambda function asynchronously Several AWS services, such as Amazon Simple Storage Service (Amazon S3) and Amazon Simple Notification Service (Amazon SNS), invoke functions asynchronously to process events. You can also invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs. When you invoke a function asynchronously, you don't wait for a response from the function code. You hand off the event to Lambda and Lambda handles the rest. You can configure how Lambda handles errors, and can send invocation records to a downstream resource such as Amazon Simple Queue Service (Amazon SQS) or Amazon EventBridge (EventBridge) to chain together components of your application. The following diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function."} +{"global_id": 1276, "doc_id": "lambda", "chunk_id": "19", "question_id": 1, "question": "What does Lambda do before sending events to the function for asynchronous invocation?", "answer_span": "Lambda queues the events before sending them to the function.", "chunk": "diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function. To invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs, set the InvocationType parameter to Event. The following example shows an AWS CLI command to invoke a function. 
aws lambda invoke \\ --function-name my-function \\ --invocation-type Event \\ --cli-binary-format raw-in-base64-out \\ Asynchronous invocation 329 AWS Lambda Developer Guide --payload '{ \"key\": \"value\" }' response.json You should see the following output: { \"StatusCode\": 202 } The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. The output file (response.json) doesn't contain any information, but is still created when you run this command. If Lambda isn't able to add the event to the queue, the error message appears in the command output. How Lambda handles errors and retries with asynchronous invocation Lambda manages your function's asynchronous event queue and attempts to retry on errors. If the function returns an error, by default Lambda attempts to run it two more times, with a one-minute wait between the first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors"} +{"global_id": 1277, "doc_id": "lambda", "chunk_id": "19", "question_id": 2, "question": "What parameter must be set to invoke a Lambda function asynchronously using the AWS CLI?", "answer_span": "set the InvocationType parameter to Event.", "chunk": "diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function. To invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs, set the InvocationType parameter to Event. The following example shows an AWS CLI command to invoke a function. aws lambda invoke \\ --function-name my-function \\ --invocation-type Event \\ --cli-binary-format raw-in-base64-out \\ Asynchronous invocation 329 AWS Lambda Developer Guide --payload '{ \"key\": \"value\" }' response.json You should see the following output: { \"StatusCode\": 202 } The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. The output file (response.json) doesn't contain any information, but is still created when you run this command. If Lambda isn't able to add the event to the queue, the error message appears in the command output. How Lambda handles errors and retries with asynchronous invocation Lambda manages your function's asynchronous event queue and attempts to retry on errors. If the function returns an error, by default Lambda attempts to run it two more times, with a one-minute wait between the first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. 
If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors"} +{"global_id": 1278, "doc_id": "lambda", "chunk_id": "19", "question_id": 3, "question": "What is the default behavior of Lambda when a function returns an error during asynchronous invocation?", "answer_span": "by default Lambda attempts to run it two more times, with a one-minute wait between the first two attempts, and two minutes between the second and third attempts.", "chunk": "diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function. To invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs, set the InvocationType parameter to Event. The following example shows an AWS CLI command to invoke a function. aws lambda invoke \\ --function-name my-function \\ --invocation-type Event \\ --cli-binary-format raw-in-base64-out \\ Asynchronous invocation 329 AWS Lambda Developer Guide --payload '{ \"key\": \"value\" }' response.json You should see the following output: { \"StatusCode\": 202 } The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. The output file (response.json) doesn't contain any information, but is still created when you run this command. If Lambda isn't able to add the event to the queue, the error message appears in the command output. How Lambda handles errors and retries with asynchronous invocation Lambda manages your function's asynchronous event queue and attempts to retry on errors. If the function returns an error, by default Lambda attempts to run it two more times, with a one-minute wait between the first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors"} +{"global_id": 1279, "doc_id": "lambda", "chunk_id": "19", "question_id": 4, "question": "What happens if Lambda isn't able to add the event to the queue?", "answer_span": "the error message appears in the command output.", "chunk": "diagram shows clients invoking a Lambda function asynchronously. Lambda queues the events before sending them to the function. For asynchronous invocation, Lambda places the event in a queue and returns a success response without additional information. A separate process reads events from the queue and sends them to your function. To invoke a Lambda function asynchronously using the AWS Command Line Interface (AWS CLI) or one of the AWS SDKs, set the InvocationType parameter to Event. The following example shows an AWS CLI command to invoke a function. 
aws lambda invoke \\ --function-name my-function \\ --invocation-type Event \\ --cli-binary-format raw-in-base64-out \\ Asynchronous invocation 329 AWS Lambda Developer Guide --payload '{ \"key\": \"value\" }' response.json You should see the following output: { \"StatusCode\": 202 } The cli-binary-format option is required if you're using AWS CLI version 2. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options in the AWS Command Line Interface User Guide for Version 2. The output file (response.json) doesn't contain any information, but is still created when you run this command. If Lambda isn't able to add the event to the queue, the error message appears in the command output. How Lambda handles errors and retries with asynchronous invocation Lambda manages your function's asynchronous event queue and attempts to retry on errors. If the function returns an error, by default Lambda attempts to run it two more times, with a one-minute wait between the first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors"} +{"global_id": 1280, "doc_id": "lambda", "chunk_id": "20", "question_id": 1, "question": "What is the maximum retry interval for Lambda after the first attempt?", "answer_span": "the maximum of 5 minutes", "chunk": "first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors (429) and system errors (500-series), Lambda returns the event to the queue and attempts to run the function again for up to 6 hours by default. The retry interval increases exponentially from 1 second after the first attempt to a maximum of 5 minutes. If the queue contains many entries, Lambda increases the retry interval and reduces the rate at which it reads events from the queue. Even if your function doesn't return an error, it's possible for it to receive the same event from Lambda multiple times because the queue itself is eventually consistent. If the function can't keep up with incoming events, events might also be deleted from the queue without being sent to the function. Ensure that your function code gracefully handles duplicate events, and that you have enough concurrency available to handle all invocations. Error handling 330 AWS Lambda Developer Guide Understanding Lambda function scaling Concurrency is the number of in-flight requests that your AWS Lambda function is handling at the same time. For each concurrent request, Lambda provisions a separate instance of your execution environment. As your functions receive more requests, Lambda automatically handles scaling the number of execution environments until you reach your account's concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. 
To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. This"} +{"global_id": 1281, "doc_id": "lambda", "chunk_id": "20", "question_id": 2, "question": "What happens if the function doesn't have enough concurrency available?", "answer_span": "additional requests are throttled", "chunk": "first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors (429) and system errors (500-series), Lambda returns the event to the queue and attempts to run the function again for up to 6 hours by default. The retry interval increases exponentially from 1 second after the first attempt to a maximum of 5 minutes. If the queue contains many entries, Lambda increases the retry interval and reduces the rate at which it reads events from the queue. Even if your function doesn't return an error, it's possible for it to receive the same event from Lambda multiple times because the queue itself is eventually consistent. If the function can't keep up with incoming events, events might also be deleted from the queue without being sent to the function. Ensure that your function code gracefully handles duplicate events, and that you have enough concurrency available to handle all invocations. Error handling 330 AWS Lambda Developer Guide Understanding Lambda function scaling Concurrency is the number of in-flight requests that your AWS Lambda function is handling at the same time. For each concurrent request, Lambda provisions a separate instance of your execution environment. As your functions receive more requests, Lambda automatically handles scaling the number of execution environments until you reach your account's concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. This"} +{"global_id": 1282, "doc_id": "lambda", "chunk_id": "20", "question_id": 3, "question": "What is the default total concurrency limit for an AWS account?", "answer_span": "a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region", "chunk": "first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors (429) and system errors (500-series), Lambda returns the event to the queue and attempts to run the function again for up to 6 hours by default. The retry interval increases exponentially from 1 second after the first attempt to a maximum of 5 minutes. If the queue contains many entries, Lambda increases the retry interval and reduces the rate at which it reads events from the queue. 
Even if your function doesn't return an error, it's possible for it to receive the same event from Lambda multiple times because the queue itself is eventually consistent. If the function can't keep up with incoming events, events might also be deleted from the queue without being sent to the function. Ensure that your function code gracefully handles duplicate events, and that you have enough concurrency available to handle all invocations. Error handling 330 AWS Lambda Developer Guide Understanding Lambda function scaling Concurrency is the number of in-flight requests that your AWS Lambda function is handling at the same time. For each concurrent request, Lambda provisions a separate instance of your execution environment. As your functions receive more requests, Lambda automatically handles scaling the number of execution environments until you reach your account's concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. This"} +{"global_id": 1283, "doc_id": "lambda", "chunk_id": "20", "question_id": 4, "question": "What should you ensure about your function code regarding duplicate events?", "answer_span": "your function code gracefully handles duplicate events", "chunk": "first two attempts, and two minutes between the second and third attempts. Function errors include errors returned by the function's code and errors returned by the function's runtime, such as timeouts. If the function doesn't have enough concurrency available to process all events, additional requests are throttled. For throttling errors (429) and system errors (500-series), Lambda returns the event to the queue and attempts to run the function again for up to 6 hours by default. The retry interval increases exponentially from 1 second after the first attempt to a maximum of 5 minutes. If the queue contains many entries, Lambda increases the retry interval and reduces the rate at which it reads events from the queue. Even if your function doesn't return an error, it's possible for it to receive the same event from Lambda multiple times because the queue itself is eventually consistent. If the function can't keep up with incoming events, events might also be deleted from the queue without being sent to the function. Ensure that your function code gracefully handles duplicate events, and that you have enough concurrency available to handle all invocations. Error handling 330 AWS Lambda Developer Guide Understanding Lambda function scaling Concurrency is the number of in-flight requests that your AWS Lambda function is handling at the same time. For each concurrent request, Lambda provisions a separate instance of your execution environment. As your functions receive more requests, Lambda automatically handles scaling the number of execution environments until you reach your account's concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. 
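Because the async queue is eventually consistent and can deliver the same event more than once, as the record above notes, the handler should be idempotent. One common pattern (a sketch, not something prescribed by the source) is a conditional write keyed on a unique event identifier; the `processed-events` table and the `event_id` field are assumptions.

```python
# Sketch of one idempotency pattern: record each event ID with a conditional
# write so a duplicate delivery is detected and skipped.
import boto3
from botocore.exceptions import ClientError

table = boto3.resource("dynamodb").Table("processed-events")  # assumed table

def handler(event, context):
    try:
        table.put_item(
            Item={"event_id": event["event_id"]},             # assumed field
            ConditionExpression="attribute_not_exists(event_id)",
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return {"status": "duplicate ignored"}
        raise
    # ... process the event exactly once here ...
    return {"status": "processed"}
```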
This"} +{"global_id": 1284, "doc_id": "lambda", "chunk_id": "21", "question_id": 1, "question": "What is the default total concurrency limit provided by Lambda for an account?", "answer_span": "By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region.", "chunk": "concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. This topic explains concurrency concepts and function scaling in Lambda. By the end of this topic, you'll be able to understand how to calculate concurrency, visualize the two main concurrency control options (reserved and provisioned), estimate appropriate concurrency control settings, and view metrics for further optimization. Sections • Understanding and visualizing concurrency • Calculating concurrency for a function • Understanding reserved concurrency and provisioned concurrency • Understanding concurrency and requests per second • Concurrency quotas • Configuring reserved concurrency for a function • Configuring provisioned concurrency for a function • Lambda scaling behavior • Monitoring concurrency Understanding and visualizing concurrency Lambda invokes your function in a secure and isolated execution environment. To handle a request, Lambda must first initialize an execution environment (the Init phase), before using it to invoke your function (the Invoke phase): Understanding and visualizing concurrency 438 AWS Lambda Developer Guide Note Actual Init and Invoke durations can vary depending on many factors, such as the runtime you choose and the Lambda function code. The previous diagram isn't meant to represent the exact proportions of Init and Invoke phase durations. The previous diagram uses a rectangle to represent a single execution environment. When your function receives its very first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and"} +{"global_id": 1285, "doc_id": "lambda", "chunk_id": "21", "question_id": 2, "question": "What can you request to support your specific account needs regarding concurrency?", "answer_span": "To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling.", "chunk": "concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. This topic explains concurrency concepts and function scaling in Lambda. By the end of this topic, you'll be able to understand how to calculate concurrency, visualize the two main concurrency control options (reserved and provisioned), estimate appropriate concurrency control settings, and view metrics for further optimization. 
Sections • Understanding and visualizing concurrency • Calculating concurrency for a function • Understanding reserved concurrency and provisioned concurrency • Understanding concurrency and requests per second • Concurrency quotas • Configuring reserved concurrency for a function • Configuring provisioned concurrency for a function • Lambda scaling behavior • Monitoring concurrency Understanding and visualizing concurrency Lambda invokes your function in a secure and isolated execution environment. To handle a request, Lambda must first initialize an execution environment (the Init phase), before using it to invoke your function (the Invoke phase): Understanding and visualizing concurrency 438 AWS Lambda Developer Guide Note Actual Init and Invoke durations can vary depending on many factors, such as the runtime you choose and the Lambda function code. The previous diagram isn't meant to represent the exact proportions of Init and Invoke phase durations. The previous diagram uses a rectangle to represent a single execution environment. When your function receives its very first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and"} +{"global_id": 1286, "doc_id": "lambda", "chunk_id": "21", "question_id": 3, "question": "What are the two main concurrency control options mentioned in the text?", "answer_span": "visualize the two main concurrency control options (reserved and provisioned)", "chunk": "concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. This topic explains concurrency concepts and function scaling in Lambda. By the end of this topic, you'll be able to understand how to calculate concurrency, visualize the two main concurrency control options (reserved and provisioned), estimate appropriate concurrency control settings, and view metrics for further optimization. Sections • Understanding and visualizing concurrency • Calculating concurrency for a function • Understanding reserved concurrency and provisioned concurrency • Understanding concurrency and requests per second • Concurrency quotas • Configuring reserved concurrency for a function • Configuring provisioned concurrency for a function • Lambda scaling behavior • Monitoring concurrency Understanding and visualizing concurrency Lambda invokes your function in a secure and isolated execution environment. To handle a request, Lambda must first initialize an execution environment (the Init phase), before using it to invoke your function (the Invoke phase): Understanding and visualizing concurrency 438 AWS Lambda Developer Guide Note Actual Init and Invoke durations can vary depending on many factors, such as the runtime you choose and the Lambda function code. The previous diagram isn't meant to represent the exact proportions of Init and Invoke phase durations. The previous diagram uses a rectangle to represent a single execution environment. 
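The Init/Invoke split described in the record above maps directly onto where code lives in a handler module: module-scope code runs once per execution environment during the Init phase, while the handler body runs on every Invoke. A minimal sketch:

```python
# Module scope runs during Init, once per execution environment; the handler
# body runs during Invoke, for every request served by that environment.
import time

INIT_TIME = time.time()   # Init phase: executed once per environment

def handler(event, context):
    # Invoke phase: executed for every request this environment handles
    return {"seconds_since_init": round(time.time() - INIT_TIME, 3)}
```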
When your function receives its very first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and"} +{"global_id": 1287, "doc_id": "lambda", "chunk_id": "21", "question_id": 4, "question": "What must Lambda do first to handle a request?", "answer_span": "Lambda must first initialize an execution environment (the Init phase), before using it to invoke your function (the Invoke phase):", "chunk": "concurrency limit. By default, Lambda provides your account with a total concurrency limit of 1,000 concurrent executions across all functions in an AWS Region. To support your specific account needs, you can request a quota increase and configure function-level concurrency controls so that your critical functions don't experience throttling. This topic explains concurrency concepts and function scaling in Lambda. By the end of this topic, you'll be able to understand how to calculate concurrency, visualize the two main concurrency control options (reserved and provisioned), estimate appropriate concurrency control settings, and view metrics for further optimization. Sections • Understanding and visualizing concurrency • Calculating concurrency for a function • Understanding reserved concurrency and provisioned concurrency • Understanding concurrency and requests per second • Concurrency quotas • Configuring reserved concurrency for a function • Configuring provisioned concurrency for a function • Lambda scaling behavior • Monitoring concurrency Understanding and visualizing concurrency Lambda invokes your function in a secure and isolated execution environment. To handle a request, Lambda must first initialize an execution environment (the Init phase), before using it to invoke your function (the Invoke phase): Understanding and visualizing concurrency 438 AWS Lambda Developer Guide Note Actual Init and Invoke durations can vary depending on many factors, such as the runtime you choose and the Lambda function code. The previous diagram isn't meant to represent the exact proportions of Init and Invoke phase durations. The previous diagram uses a rectangle to represent a single execution environment. When your function receives its very first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and"} +{"global_id": 1288, "doc_id": "lambda", "chunk_id": "22", "question_id": 1, "question": "What happens during the Init phase of the first request?", "answer_span": "Lambda creates a new execution environment and runs the code outside your main handler during the Init phase.", "chunk": "first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and cannot process other requests. When Lambda finishes processing the first request, this execution environment can then process additional requests for the same function. 
For subsequent requests, Lambda doesn't need to reinitialize the environment. In the previous diagram, Lambda reuses the execution environment to handle the second request (represented by the yellow circle with label 2). Understanding and visualizing concurrency 439 AWS Lambda Developer Guide So far, we've focused on just a single instance of your execution environment (that is, a concurrency of 1). In practice, Lambda may need to provision multiple execution environment instances in parallel to handle all incoming requests. When your function receives a new request, one of two things can happen: • If a pre-initialized execution environment instance is available, Lambda uses it to process the request. • Otherwise, Lambda creates a new execution environment instance to process the request. For example, let's explore what happens when your function receives 10 requests: In the previous diagram, each horizontal plane represents a single execution environment instance (labeled from A through F). Here's how Lambda handles each request: Request Lambda behavior Reasoning 1 Provisions new environment A This is the first request; no execution environment instances are available. 2 Provisions new environment B Existing execution environme nt instance A is busy. Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability •"} +{"global_id": 1289, "doc_id": "lambda", "chunk_id": "22", "question_id": 2, "question": "What does Lambda do during the Invoke phase?", "answer_span": "Then, Lambda runs your function's main handler code during the Invoke phase.", "chunk": "first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and cannot process other requests. When Lambda finishes processing the first request, this execution environment can then process additional requests for the same function. For subsequent requests, Lambda doesn't need to reinitialize the environment. In the previous diagram, Lambda reuses the execution environment to handle the second request (represented by the yellow circle with label 2). Understanding and visualizing concurrency 439 AWS Lambda Developer Guide So far, we've focused on just a single instance of your execution environment (that is, a concurrency of 1). In practice, Lambda may need to provision multiple execution environment instances in parallel to handle all incoming requests. When your function receives a new request, one of two things can happen: • If a pre-initialized execution environment instance is available, Lambda uses it to process the request. • Otherwise, Lambda creates a new execution environment instance to process the request. For example, let's explore what happens when your function receives 10 requests: In the previous diagram, each horizontal plane represents a single execution environment instance (labeled from A through F). Here's how Lambda handles each request: Request Lambda behavior Reasoning 1 Provisions new environment A This is the first request; no execution environment instances are available. 2 Provisions new environment B Existing execution environme nt instance A is busy. 
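The walkthrough above shows Lambda adding execution environments as requests arrive faster than busy environments free up. The usual back-of-the-envelope estimate for how many environments a steady load needs (not spelled out in this excerpt) is requests per second multiplied by average request duration in seconds:

```python
# Rough estimate of required concurrency for a steady request rate,
# assuming concurrency ≈ requests per second × average duration in seconds.
def estimated_concurrency(requests_per_second: float, avg_duration_ms: float) -> float:
    return requests_per_second * (avg_duration_ms / 1000.0)

# 100 requests/second at an average of 500 ms needs roughly 50 environments.
print(estimated_concurrency(100, 500))  # 50.0
```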
Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability •"} +{"global_id": 1290, "doc_id": "lambda", "chunk_id": "22", "question_id": 3, "question": "What can happen when your function receives a new request?", "answer_span": "When your function receives a new request, one of two things can happen: • If a pre-initialized execution environment instance is available, Lambda uses it to process the request. • Otherwise, Lambda creates a new execution environment instance to process the request.", "chunk": "first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and cannot process other requests. When Lambda finishes processing the first request, this execution environment can then process additional requests for the same function. For subsequent requests, Lambda doesn't need to reinitialize the environment. In the previous diagram, Lambda reuses the execution environment to handle the second request (represented by the yellow circle with label 2). Understanding and visualizing concurrency 439 AWS Lambda Developer Guide So far, we've focused on just a single instance of your execution environment (that is, a concurrency of 1). In practice, Lambda may need to provision multiple execution environment instances in parallel to handle all incoming requests. When your function receives a new request, one of two things can happen: • If a pre-initialized execution environment instance is available, Lambda uses it to process the request. • Otherwise, Lambda creates a new execution environment instance to process the request. For example, let's explore what happens when your function receives 10 requests: In the previous diagram, each horizontal plane represents a single execution environment instance (labeled from A through F). Here's how Lambda handles each request: Request Lambda behavior Reasoning 1 Provisions new environment A This is the first request; no execution environment instances are available. 2 Provisions new environment B Existing execution environme nt instance A is busy. Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability •"} +{"global_id": 1291, "doc_id": "lambda", "chunk_id": "22", "question_id": 4, "question": "What is the behavior of Lambda when processing the first request?", "answer_span": "Provisions new environment A This is the first request; no execution environment instances are available.", "chunk": "first request (represented by the yellow circle with label 1), Lambda creates a new execution environment and runs the code outside your main handler during the Init phase. Then, Lambda runs your function's main handler code during the Invoke phase. During this entire process, this execution environment is busy and cannot process other requests. When Lambda finishes processing the first request, this execution environment can then process additional requests for the same function. 
For subsequent requests, Lambda doesn't need to reinitialize the environment. In the previous diagram, Lambda reuses the execution environment to handle the second request (represented by the yellow circle with label 2). Understanding and visualizing concurrency 439 AWS Lambda Developer Guide So far, we've focused on just a single instance of your execution environment (that is, a concurrency of 1). In practice, Lambda may need to provision multiple execution environment instances in parallel to handle all incoming requests. When your function receives a new request, one of two things can happen: • If a pre-initialized execution environment instance is available, Lambda uses it to process the request. • Otherwise, Lambda creates a new execution environment instance to process the request. For example, let's explore what happens when your function receives 10 requests: In the previous diagram, each horizontal plane represents a single execution environment instance (labeled from A through F). Here's how Lambda handles each request: Request Lambda behavior Reasoning 1 Provisions new environment A This is the first request; no execution environment instances are available. 2 Provisions new environment B Existing execution environme nt instance A is busy. Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability •"} +{"global_id": 1292, "doc_id": "lambda", "chunk_id": "23", "question_id": 1, "question": "What should you do to improve the performance of your function?", "answer_span": "Take advantage of execution environment reuse to improve the performance of your function.", "chunk": "Provisions new environment B Existing execution environme nt instance A is busy. Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability • Metrics and alarms • Working with streams • Security best practices Function code • Take advantage of execution environment reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves cost by reducing function run time. To avoid potential data leaks across invocations, don’t use the execution environment to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user. • Use a keep-alive directive to maintain persistent connections. Lambda purges idle connections over time. Attempting to reuse an idle connection when invoking a function will result in a connection error. To maintain your persistent connection, use the keep-alive directive associated with your runtime. For an example, see Reusing Connections with Keep-Alive in Node.js. • Use environment variables to pass operational parameters to your function. 
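The best-practice bullets above (reuse the execution environment, keep SDK clients outside the handler, cache static assets in /tmp, pass operational parameters as environment variables) combine naturally into one handler shape. The sketch below assumes a `BUCKET_NAME` environment variable and a `config.json` object; both are illustrative.

```python
# Sketch combining the bulleted advice above: module-scope SDK client,
# bucket name from an environment variable, and a static asset cached in /tmp.
import os
import boto3

s3 = boto3.client("s3")                 # initialized once per environment
BUCKET = os.environ["BUCKET_NAME"]      # operational parameter, not hard-coded
CACHE_PATH = "/tmp/config.json"

def handler(event, context):
    if not os.path.exists(CACHE_PATH):  # download only on a cold environment
        s3.download_file(BUCKET, "config.json", CACHE_PATH)
    with open(CACHE_PATH) as f:
        config = f.read()
    return {"config_bytes": len(config)}
```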
For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. • Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code"} +{"global_id": 1293, "doc_id": "lambda", "chunk_id": "23", "question_id": 2, "question": "Where should you initialize SDK clients and database connections?", "answer_span": "Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory.", "chunk": "Provisions new environment B Existing execution environme nt instance A is busy. Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability • Metrics and alarms • Working with streams • Security best practices Function code • Take advantage of execution environment reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves cost by reducing function run time. To avoid potential data leaks across invocations, don’t use the execution environment to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user. • Use a keep-alive directive to maintain persistent connections. Lambda purges idle connections over time. Attempting to reuse an idle connection when invoking a function will result in a connection error. To maintain your persistent connection, use the keep-alive directive associated with your runtime. For an example, see Reusing Connections with Keep-Alive in Node.js. • Use environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. • Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code"} +{"global_id": 1294, "doc_id": "lambda", "chunk_id": "23", "question_id": 3, "question": "What should you avoid using the execution environment for?", "answer_span": "To avoid potential data leaks across invocations, don’t use the execution environment to store user data, events, or other information with security implications.", "chunk": "Provisions new environment B Existing execution environme nt instance A is busy. Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability • Metrics and alarms • Working with streams • Security best practices Function code • Take advantage of execution environment reuse to improve the performance of your function. 
Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves cost by reducing function run time. To avoid potential data leaks across invocations, don’t use the execution environment to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user. • Use a keep-alive directive to maintain persistent connections. Lambda purges idle connections over time. Attempting to reuse an idle connection when invoking a function will result in a connection error. To maintain your persistent connection, use the keep-alive directive associated with your runtime. For an example, see Reusing Connections with Keep-Alive in Node.js. • Use environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. • Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code"} +{"global_id": 1295, "doc_id": "lambda", "chunk_id": "23", "question_id": 4, "question": "What directive should you use to maintain persistent connections?", "answer_span": "Use a keep-alive directive to maintain persistent connections.", "chunk": "Provisions new environment B Existing execution environme nt instance A is busy. Understanding and visualizing concurrency 440 AWS Lambda Developer Guide Best practices for working with AWS Lambda functions The following are recommended best practices for using AWS Lambda: Topics • Function code • Function configuration • Function scalability • Metrics and alarms • Working with streams • Security best practices Function code • Take advantage of execution environment reuse to improve the performance of your function. Initialize SDK clients and database connections outside of the function handler, and cache static assets locally in the /tmp directory. Subsequent invocations processed by the same instance of your function can reuse these resources. This saves cost by reducing function run time. To avoid potential data leaks across invocations, don’t use the execution environment to store user data, events, or other information with security implications. If your function relies on a mutable state that can’t be stored in memory within the handler, consider creating a separate function or separate versions of a function for each user. • Use a keep-alive directive to maintain persistent connections. Lambda purges idle connections over time. Attempting to reuse an idle connection when invoking a function will result in a connection error. To maintain your persistent connection, use the keep-alive directive associated with your runtime. For an example, see Reusing Connections with Keep-Alive in Node.js. • Use environment variables to pass operational parameters to your function. For example, if you are writing to an Amazon S3 bucket, instead of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. 
• Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code"} +{"global_id": 1296, "doc_id": "lambda", "chunk_id": "24", "question_id": 1, "question": "What should you configure instead of hard-coding the bucket name?", "answer_span": "configure the bucket name as an environment variable.", "chunk": "of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. • Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code 1068 AWS Lambda Developer Guide function invocations and escalated costs. If you see an unintended volume of invocations, set the function reserved concurrency to 0 immediately to throttle all invocations to the function, while you update the code. • Do not use non-documented, non-public APIs in your Lambda function code. For AWS Lambda managed runtimes, Lambda periodically applies security and functional updates to Lambda's internal APIs. These internal API updates may be backwards-incompatible, leading to unintended consequences such as invocation failures if your function has a dependency on these non-public APIs. See the API reference for a list of publicly available APIs. • Write idempotent code. Writing idempotent code for your functions ensures that duplicate events are handled the same way. Your code should properly validate events and gracefully handle duplicate events. For more information, see How do I make my Lambda function idempotent?. For language-specific code best practices, refer to the following sections: • the section called “Best practices” • the section called “Best practices” • the section called “Code best practices for Python Lambda functions” • the section called “Code best practices for Ruby Lambda functions” • the section called “Code best practices for Java Lambda functions” • the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum"} +{"global_id": 1297, "doc_id": "lambda", "chunk_id": "24", "question_id": 2, "question": "What should you avoid using in your Lambda function?", "answer_span": "Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again.", "chunk": "of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. • Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code 1068 AWS Lambda Developer Guide function invocations and escalated costs. If you see an unintended volume of invocations, set the function reserved concurrency to 0 immediately to throttle all invocations to the function, while you update the code. • Do not use non-documented, non-public APIs in your Lambda function code. For AWS Lambda managed runtimes, Lambda periodically applies security and functional updates to Lambda's internal APIs. 
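The "set the function reserved concurrency to 0" emergency stop described above maps to the PutFunctionConcurrency API; a sketch with `my-function` as a placeholder:

```python
# Sketch: throttle every invocation of a runaway function while you fix the
# code, then remove the cap when it is safe again.
import boto3

lambda_client = boto3.client("lambda")

# Stop all invocations immediately.
lambda_client.put_function_concurrency(
    FunctionName="my-function",
    ReservedConcurrentExecutions=0,
)

# Later, lift the cap so the function shares the unreserved pool again:
# lambda_client.delete_function_concurrency(FunctionName="my-function")
```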
These internal API updates may be backwards-incompatible, leading to unintended consequences such as invocation failures if your function has a dependency on these non-public APIs. See the API reference for a list of publicly available APIs. • Write idempotent code. Writing idempotent code for your functions ensures that duplicate events are handled the same way. Your code should properly validate events and gracefully handle duplicate events. For more information, see How do I make my Lambda function idempotent?. For language-specific code best practices, refer to the following sections: • the section called “Best practices” • the section called “Best practices” • the section called “Code best practices for Python Lambda functions” • the section called “Code best practices for Ruby Lambda functions” • the section called “Code best practices for Java Lambda functions” • the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum"} +{"global_id": 1298, "doc_id": "lambda", "chunk_id": "24", "question_id": 3, "question": "What should you do if you see an unintended volume of invocations?", "answer_span": "set the function reserved concurrency to 0 immediately to throttle all invocations to the function, while you update the code.", "chunk": "of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. • Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code 1068 AWS Lambda Developer Guide function invocations and escalated costs. If you see an unintended volume of invocations, set the function reserved concurrency to 0 immediately to throttle all invocations to the function, while you update the code. • Do not use non-documented, non-public APIs in your Lambda function code. For AWS Lambda managed runtimes, Lambda periodically applies security and functional updates to Lambda's internal APIs. These internal API updates may be backwards-incompatible, leading to unintended consequences such as invocation failures if your function has a dependency on these non-public APIs. See the API reference for a list of publicly available APIs. • Write idempotent code. Writing idempotent code for your functions ensures that duplicate events are handled the same way. Your code should properly validate events and gracefully handle duplicate events. For more information, see How do I make my Lambda function idempotent?. 
For language-specific code best practices, refer to the following sections: • the section called “Best practices” • the section called “Best practices” • the section called “Code best practices for Python Lambda functions” • the section called “Code best practices for Ruby Lambda functions” • the section called “Code best practices for Java Lambda functions” • the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum"} +{"global_id": 1299, "doc_id": "lambda", "chunk_id": "24", "question_id": 4, "question": "What type of code should you write for your functions?", "answer_span": "Write idempotent code.", "chunk": "of hard-coding the bucket name you are writing to, configure the bucket name as an environment variable. • Avoid using recursive invocations in your Lambda function, where the function invokes itself or initiates a process that may invoke the function again. This could lead to unintended volume of Function code 1068 AWS Lambda Developer Guide function invocations and escalated costs. If you see an unintended volume of invocations, set the function reserved concurrency to 0 immediately to throttle all invocations to the function, while you update the code. • Do not use non-documented, non-public APIs in your Lambda function code. For AWS Lambda managed runtimes, Lambda periodically applies security and functional updates to Lambda's internal APIs. These internal API updates may be backwards-incompatible, leading to unintended consequences such as invocation failures if your function has a dependency on these non-public APIs. See the API reference for a list of publicly available APIs. • Write idempotent code. Writing idempotent code for your functions ensures that duplicate events are handled the same way. Your code should properly validate events and gracefully handle duplicate events. For more information, see How do I make my Lambda function idempotent?. For language-specific code best practices, refer to the following sections: • the section called “Best practices” • the section called “Best practices” • the section called “Code best practices for Python Lambda functions” • the section called “Code best practices for Ruby Lambda functions” • the section called “Code best practices for Java Lambda functions” • the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum"} +{"global_id": 1300, "doc_id": "lambda", "chunk_id": "25", "question_id": 1, "question": "What is a crucial part in ensuring you pick the optimum memory size configuration for your Lambda function?", "answer_span": "Performance testing your Lambda function is a crucial part in ensuring you pick the optimum memory size configuration.", "chunk": "the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum memory size configuration. 
Any increase in memory size triggers an equivalent increase in CPU available to your function. The memory usage for your function is determined per-invoke and can be viewed in Amazon CloudWatch. On each invoke a REPORT: entry will be made, as shown below: REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB Function configuration 1069 AWS Lambda Developer Guide By analyzing the Max Memory Used: field, you can determine if your function needs more memory or if you over-provisioned your function's memory size. To find the right memory configuration for your functions, we recommend using the open source AWS Lambda Power Tuning project. For more information, see AWS Lambda Power Tuning on GitHub. To optimize function performance, we also recommend deploying libraries that can leverage Advanced Vector Extensions 2 (AVX2). This allows you to process demanding workloads, including machine learning inferencing, media processing, high performance computing (HPC), scientific simulations, and financial modeling. For more information, see Creating faster AWS Lambda functions with AVX2. • Load test your Lambda function to determine an optimum timeout value. It is important to analyze how long your function runs so that you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see"} +{"global_id": 1301, "doc_id": "lambda", "chunk_id": "25", "question_id": 2, "question": "What happens when there is an increase in memory size for your Lambda function?", "answer_span": "Any increase in memory size triggers an equivalent increase in CPU available to your function.", "chunk": "the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum memory size configuration. Any increase in memory size triggers an equivalent increase in CPU available to your function. The memory usage for your function is determined per-invoke and can be viewed in Amazon CloudWatch. On each invoke a REPORT: entry will be made, as shown below: REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB Function configuration 1069 AWS Lambda Developer Guide By analyzing the Max Memory Used: field, you can determine if your function needs more memory or if you over-provisioned your function's memory size. To find the right memory configuration for your functions, we recommend using the open source AWS Lambda Power Tuning project. For more information, see AWS Lambda Power Tuning on GitHub. To optimize function performance, we also recommend deploying libraries that can leverage Advanced Vector Extensions 2 (AVX2). This allows you to process demanding workloads, including machine learning inferencing, media processing, high performance computing (HPC), scientific simulations, and financial modeling. For more information, see Creating faster AWS Lambda functions with AVX2. • Load test your Lambda function to determine an optimum timeout value. 
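Once performance testing (for example with the AWS Lambda Power Tuning project mentioned above) suggests a memory size and timeout, they can be applied with UpdateFunctionConfiguration. The 512 MB and 30 s values below are illustrative placeholders, not recommendations.

```python
# Sketch: applying a memory size and timeout chosen from load testing.
# CPU scales with the memory setting, as noted above.
import boto3

boto3.client("lambda").update_function_configuration(
    FunctionName="my-function",
    MemorySize=512,   # MB; illustrative value from testing
    Timeout=30,       # seconds; derive from measured durations, not guesswork
)
```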
It is important to analyze how long your function runs so that you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see"} +{"global_id": 1302, "doc_id": "lambda", "chunk_id": "25", "question_id": 3, "question": "How can you determine if your function needs more memory?", "answer_span": "By analyzing the Max Memory Used: field, you can determine if your function needs more memory or if you over-provisioned your function's memory size.", "chunk": "the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum memory size configuration. Any increase in memory size triggers an equivalent increase in CPU available to your function. The memory usage for your function is determined per-invoke and can be viewed in Amazon CloudWatch. On each invoke a REPORT: entry will be made, as shown below: REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB Function configuration 1069 AWS Lambda Developer Guide By analyzing the Max Memory Used: field, you can determine if your function needs more memory or if you over-provisioned your function's memory size. To find the right memory configuration for your functions, we recommend using the open source AWS Lambda Power Tuning project. For more information, see AWS Lambda Power Tuning on GitHub. To optimize function performance, we also recommend deploying libraries that can leverage Advanced Vector Extensions 2 (AVX2). This allows you to process demanding workloads, including machine learning inferencing, media processing, high performance computing (HPC), scientific simulations, and financial modeling. For more information, see Creating faster AWS Lambda functions with AVX2. • Load test your Lambda function to determine an optimum timeout value. It is important to analyze how long your function runs so that you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see"} +{"global_id": 1303, "doc_id": "lambda", "chunk_id": "25", "question_id": 4, "question": "What do you recommend using to find the right memory configuration for your functions?", "answer_span": "we recommend using the open source AWS Lambda Power Tuning project.", "chunk": "the section called “Code best practices for Go Lambda functions” • the section called “Code best practices for C# Lambda functions” • the section called “Code best practices for Rust Lambda functions” Function configuration • Performance testing your Lambda function is a crucial part in ensuring you pick the optimum memory size configuration. Any increase in memory size triggers an equivalent increase in CPU available to your function. The memory usage for your function is determined per-invoke and can be viewed in Amazon CloudWatch. 
On each invoke a REPORT: entry will be made, as shown below: REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 Duration: 12.34 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 18 MB Function configuration 1069 AWS Lambda Developer Guide By analyzing the Max Memory Used: field, you can determine if your function needs more memory or if you over-provisioned your function's memory size. To find the right memory configuration for your functions, we recommend using the open source AWS Lambda Power Tuning project. For more information, see AWS Lambda Power Tuning on GitHub. To optimize function performance, we also recommend deploying libraries that can leverage Advanced Vector Extensions 2 (AVX2). This allows you to process demanding workloads, including machine learning inferencing, media processing, high performance computing (HPC), scientific simulations, and financial modeling. For more information, see Creating faster AWS Lambda functions with AVX2. • Load test your Lambda function to determine an optimum timeout value. It is important to analyze how long your function runs so that you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see"} +{"global_id": 1304, "doc_id": "lambda", "chunk_id": "26", "question_id": 1, "question": "What should you do if you are using Amazon Simple Queue Service as an event source?", "answer_span": "make sure the value of the function's expected invocation time does not exceed the Visibility Timeout value on the queue.", "chunk": "you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see Distributed Load Testing on AWS. • Use most-restrictive permissions when setting IAM policies. Understand the resources and operations your Lambda function needs, and limit the execution role to these permissions. For more information, see Managing permissions in AWS Lambda. • Be familiar with Lambda quotas. Payload size, file descriptors and /tmp space are often overlooked when determining runtime resource limits. • Delete Lambda functions that you are no longer using. By doing so, the unused functions won't needlessly count against your deployment package size limit. • If you are using Amazon Simple Queue Service as an event source, make sure the value of the function's expected invocation time does not exceed the Visibility Timeout value on the queue. This applies both to CreateFunction and UpdateFunctionConfiguration. • In the case of CreateFunction, AWS Lambda will fail the function creation process. • In the case of UpdateFunctionConfiguration, it could result in duplicate invocations of the function. Function configuration 1070 AWS Lambda Developer Guide Function scalability • Be familiar with your upstream and downstream throughput constraints. While Lambda functions scale seamlessly with load, upstream and downstream dependencies may not have the same throughput capabilities. If you need to limit how high your function can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. 
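The Memory Size and Max Memory Used fields in the REPORT entry shown above can be compared programmatically when scanning CloudWatch Logs. A small sketch parsing that line format:

```python
# Sketch: extracting "Memory Size" and "Max Memory Used" from a REPORT line
# like the one above to spot over- or under-provisioned memory.
import re

report = ("REPORT RequestId: 3604209a-e9a3-11e6-939a-754dd98c7be3 "
          "Duration: 12.34 ms Billed Duration: 100 ms "
          "Memory Size: 128 MB Max Memory Used: 18 MB")

size = int(re.search(r"Memory Size: (\d+) MB", report).group(1))
used = int(re.search(r"Max Memory Used: (\d+) MB", report).group(1))
print(f"{used}/{size} MB used ({used / size:.0%})")  # 18/128 MB used (14%)
```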
If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth"} +{"global_id": 1305, "doc_id": "lambda", "chunk_id": "26", "question_id": 2, "question": "What happens in the case of CreateFunction if the expected invocation time exceeds the Visibility Timeout?", "answer_span": "AWS Lambda will fail the function creation process.", "chunk": "you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see Distributed Load Testing on AWS. • Use most-restrictive permissions when setting IAM policies. Understand the resources and operations your Lambda function needs, and limit the execution role to these permissions. For more information, see Managing permissions in AWS Lambda. • Be familiar with Lambda quotas. Payload size, file descriptors and /tmp space are often overlooked when determining runtime resource limits. • Delete Lambda functions that you are no longer using. By doing so, the unused functions won't needlessly count against your deployment package size limit. • If you are using Amazon Simple Queue Service as an event source, make sure the value of the function's expected invocation time does not exceed the Visibility Timeout value on the queue. This applies both to CreateFunction and UpdateFunctionConfiguration. • In the case of CreateFunction, AWS Lambda will fail the function creation process. • In the case of UpdateFunctionConfiguration, it could result in duplicate invocations of the function. Function configuration 1070 AWS Lambda Developer Guide Function scalability • Be familiar with your upstream and downstream throughput constraints. While Lambda functions scale seamlessly with load, upstream and downstream dependencies may not have the same throughput capabilities. If you need to limit how high your function can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth"} +{"global_id": 1306, "doc_id": "lambda", "chunk_id": "26", "question_id": 3, "question": "What should you do with Lambda functions that you are no longer using?", "answer_span": "Delete Lambda functions that you are no longer using.", "chunk": "you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see Distributed Load Testing on AWS. • Use most-restrictive permissions when setting IAM policies. Understand the resources and operations your Lambda function needs, and limit the execution role to these permissions. For more information, see Managing permissions in AWS Lambda. • Be familiar with Lambda quotas. 
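One way to apply the "timeouts, retries, and backoff with jitter" advice above for a synchronous caller is to retry only on Lambda's throttling error with a randomized exponential delay. This is a sketch, not the only pattern; the retry count, base delay, and `my-function` name are placeholders, and boto3's built-in retries may already absorb some throttles.

```python
# Sketch: retry a synchronous invoke on throttling with full-jitter backoff.
import json
import random
import time

import boto3
from botocore.exceptions import ClientError

lambda_client = boto3.client("lambda")

def invoke_with_backoff(payload, attempts=5, base_delay=0.5):
    for attempt in range(attempts):
        try:
            return lambda_client.invoke(
                FunctionName="my-function",
                Payload=json.dumps(payload),
            )
        except ClientError as err:
            if err.response["Error"]["Code"] != "TooManyRequestsException":
                raise                                   # not a throttle: surface it
            # full jitter: sleep a random amount up to the exponential cap
            time.sleep(random.uniform(0, base_delay * (2 ** attempt)))
    raise RuntimeError("still throttled after retries")
```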
Payload size, file descriptors and /tmp space are often overlooked when determining runtime resource limits. • Delete Lambda functions that you are no longer using. By doing so, the unused functions won't needlessly count against your deployment package size limit. • If you are using Amazon Simple Queue Service as an event source, make sure the value of the function's expected invocation time does not exceed the Visibility Timeout value on the queue. This applies both to CreateFunction and UpdateFunctionConfiguration. • In the case of CreateFunction, AWS Lambda will fail the function creation process. • In the case of UpdateFunctionConfiguration, it could result in duplicate invocations of the function. Function configuration 1070 AWS Lambda Developer Guide Function scalability • Be familiar with your upstream and downstream throughput constraints. While Lambda functions scale seamlessly with load, upstream and downstream dependencies may not have the same throughput capabilities. If you need to limit how high your function can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth"} +{"global_id": 1307, "doc_id": "lambda", "chunk_id": "26", "question_id": 4, "question": "What can you configure on your function to limit how high it can scale?", "answer_span": "you can configure reserved concurrency on your function.", "chunk": "you can better determine any problems with a dependency service that may increase the concurrency of the function beyond what you expect. This is especially important when your Lambda function makes network calls to resources that may not handle Lambda's scaling. For more information about load testing your application, see Distributed Load Testing on AWS. • Use most-restrictive permissions when setting IAM policies. Understand the resources and operations your Lambda function needs, and limit the execution role to these permissions. For more information, see Managing permissions in AWS Lambda. • Be familiar with Lambda quotas. Payload size, file descriptors and /tmp space are often overlooked when determining runtime resource limits. • Delete Lambda functions that you are no longer using. By doing so, the unused functions won't needlessly count against your deployment package size limit. • If you are using Amazon Simple Queue Service as an event source, make sure the value of the function's expected invocation time does not exceed the Visibility Timeout value on the queue. This applies both to CreateFunction and UpdateFunctionConfiguration. • In the case of CreateFunction, AWS Lambda will fail the function creation process. • In the case of UpdateFunctionConfiguration, it could result in duplicate invocations of the function. Function configuration 1070 AWS Lambda Developer Guide Function scalability • Be familiar with your upstream and downstream throughput constraints. While Lambda functions scale seamlessly with load, upstream and downstream dependencies may not have the same throughput capabilities. If you need to limit how high your function can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. 
If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth"} +{"global_id": 1308, "doc_id": "lambda", "chunk_id": "27", "question_id": 1, "question": "What can you configure on your function to scale?", "answer_span": "you can configure reserved concurrency on your function.", "chunk": "can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth out retried invocations, and helps ensure Lambda can scale up within seconds to minimize enduser throttling. • Use provisioned concurrency. Provisioned concurrency is the number of pre-initialized execution environments that Lambda allocates to your function. Lambda handles incoming requests using provisioned concurrency when available. Lambda can also scale your function above and beyond your provisioned concurrency setting if required. Configuring provisioned concurrency incurs additional charges to your AWS account. Metrics and alarms • Use Using CloudWatch metrics with Lambda and CloudWatch Alarms instead of creating or updating a metric from within your Lambda function code. It's a much more efficient way to track the health of your Lambda functions, allowing you to catch issues early in the development process. For instance, you can configure an alarm based on the expected duration of your Lambda function invocation in order to address any bottlenecks or latencies attributable to your function code. • Leverage your logging library and AWS Lambda Metrics and Dimensions to catch app errors (e.g. ERR, ERROR, WARNING, etc.) • Use AWS Cost Anomaly Detection to detect unusual activity on your account. Cost Anomaly Detection uses machine learning to continuously monitor your cost and usage while minimizing false positive alerts. Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer."} +{"global_id": 1309, "doc_id": "lambda", "chunk_id": "27", "question_id": 2, "question": "What strategies can improve throttle tolerance?", "answer_span": "Use timeouts, retries, and backoff with jitter.", "chunk": "can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth out retried invocations, and helps ensure Lambda can scale up within seconds to minimize enduser throttling. • Use provisioned concurrency. Provisioned concurrency is the number of pre-initialized execution environments that Lambda allocates to your function. Lambda handles incoming requests using provisioned concurrency when available. Lambda can also scale your function above and beyond your provisioned concurrency setting if required. Configuring provisioned concurrency incurs additional charges to your AWS account. 
Metrics and alarms • Use Using CloudWatch metrics with Lambda and CloudWatch Alarms instead of creating or updating a metric from within your Lambda function code. It's a much more efficient way to track the health of your Lambda functions, allowing you to catch issues early in the development process. For instance, you can configure an alarm based on the expected duration of your Lambda function invocation in order to address any bottlenecks or latencies attributable to your function code. • Leverage your logging library and AWS Lambda Metrics and Dimensions to catch app errors (e.g. ERR, ERROR, WARNING, etc.) • Use AWS Cost Anomaly Detection to detect unusual activity on your account. Cost Anomaly Detection uses machine learning to continuously monitor your cost and usage while minimizing false positive alerts. Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer."} +{"global_id": 1310, "doc_id": "lambda", "chunk_id": "27", "question_id": 3, "question": "What is provisioned concurrency?", "answer_span": "Provisioned concurrency is the number of pre-initialized execution environments that Lambda allocates to your function.", "chunk": "can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth out retried invocations, and helps ensure Lambda can scale up within seconds to minimize enduser throttling. • Use provisioned concurrency. Provisioned concurrency is the number of pre-initialized execution environments that Lambda allocates to your function. Lambda handles incoming requests using provisioned concurrency when available. Lambda can also scale your function above and beyond your provisioned concurrency setting if required. Configuring provisioned concurrency incurs additional charges to your AWS account. Metrics and alarms • Use Using CloudWatch metrics with Lambda and CloudWatch Alarms instead of creating or updating a metric from within your Lambda function code. It's a much more efficient way to track the health of your Lambda functions, allowing you to catch issues early in the development process. For instance, you can configure an alarm based on the expected duration of your Lambda function invocation in order to address any bottlenecks or latencies attributable to your function code. • Leverage your logging library and AWS Lambda Metrics and Dimensions to catch app errors (e.g. ERR, ERROR, WARNING, etc.) • Use AWS Cost Anomaly Detection to detect unusual activity on your account. Cost Anomaly Detection uses machine learning to continuously monitor your cost and usage while minimizing false positive alerts. Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. 
To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer."} +{"global_id": 1311, "doc_id": "lambda", "chunk_id": "27", "question_id": 4, "question": "What does Cost Anomaly Detection use to monitor your cost and usage?", "answer_span": "Cost Anomaly Detection uses machine learning to continuously monitor your cost and usage while minimizing false positive alerts.", "chunk": "can scale, you can configure reserved concurrency on your function. • Build in throttle tolerance. If your synchronous function experiences throttling due to traffic exceeding Lambda's scaling rate, you can use the following strategies to improve throttle tolerance: • Use timeouts, retries, and backoff with jitter. Implementing these strategies smooth out retried invocations, and helps ensure Lambda can scale up within seconds to minimize enduser throttling. • Use provisioned concurrency. Provisioned concurrency is the number of pre-initialized execution environments that Lambda allocates to your function. Lambda handles incoming requests using provisioned concurrency when available. Lambda can also scale your function above and beyond your provisioned concurrency setting if required. Configuring provisioned concurrency incurs additional charges to your AWS account. Metrics and alarms • Use Using CloudWatch metrics with Lambda and CloudWatch Alarms instead of creating or updating a metric from within your Lambda function code. It's a much more efficient way to track the health of your Lambda functions, allowing you to catch issues early in the development process. For instance, you can configure an alarm based on the expected duration of your Lambda function invocation in order to address any bottlenecks or latencies attributable to your function code. • Leverage your logging library and AWS Lambda Metrics and Dimensions to catch app errors (e.g. ERR, ERROR, WARNING, etc.) • Use AWS Cost Anomaly Detection to detect unusual activity on your account. Cost Anomaly Detection uses machine learning to continuously monitor your cost and usage while minimizing false positive alerts. Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer."} +{"global_id": 1312, "doc_id": "lambda", "chunk_id": "28", "question_id": 1, "question": "What is the maximum delay for AWS Cost Explorer data used in Cost Anomaly Detection?", "answer_span": "Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours.", "chunk": "Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer. Then, access Cost Anomaly Detection. Function scalability 1071 AWS Lambda Developer Guide Working with streams • Test with different batch and record sizes so that the polling frequency of each event source is tuned to how quickly your function is able to complete its task. The CreateEventSourceMapping BatchSize parameter controls the maximum number of records that can be sent to your function with each invoke. A larger batch size can often more efficiently absorb the invoke overhead across a larger set of records, increasing your throughput. 
By default, Lambda invokes your function as soon as records are available. If the batch that Lambda reads from the event source has only one record in it, Lambda sends only one record to the function. To avoid invoking the function with a small number of records, you can tell the event source to buffer records for up to 5 minutes by configuring a batching window. Before invoking the function, Lambda continues to read records from the event source until it has gathered a full batch, the batching window expires, or the batch reaches the payload limit of 6 MB. For more information, see Batching behavior. Warning Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing"} +{"global_id": 1313, "doc_id": "lambda", "chunk_id": "28", "question_id": 2, "question": "What must you do first to get started with Cost Anomaly Detection?", "answer_span": "To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer.", "chunk": "Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer. Then, access Cost Anomaly Detection. Function scalability 1071 AWS Lambda Developer Guide Working with streams • Test with different batch and record sizes so that the polling frequency of each event source is tuned to how quickly your function is able to complete its task. The CreateEventSourceMapping BatchSize parameter controls the maximum number of records that can be sent to your function with each invoke. A larger batch size can often more efficiently absorb the invoke overhead across a larger set of records, increasing your throughput. By default, Lambda invokes your function as soon as records are available. If the batch that Lambda reads from the event source has only one record in it, Lambda sends only one record to the function. To avoid invoking the function with a small number of records, you can tell the event source to buffer records for up to 5 minutes by configuring a batching window. Before invoking the function, Lambda continues to read records from the event source until it has gathered a full batch, the batching window expires, or the batch reaches the payload limit of 6 MB. For more information, see Batching behavior. Warning Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing"} +{"global_id": 1314, "doc_id": "lambda", "chunk_id": "28", "question_id": 3, "question": "What does the CreateEventSourceMapping BatchSize parameter control?", "answer_span": "The CreateEventSourceMapping BatchSize parameter controls the maximum number of records that can be sent to your function with each invoke.", "chunk": "Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. 
To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer. Then, access Cost Anomaly Detection. Function scalability 1071 AWS Lambda Developer Guide Working with streams • Test with different batch and record sizes so that the polling frequency of each event source is tuned to how quickly your function is able to complete its task. The CreateEventSourceMapping BatchSize parameter controls the maximum number of records that can be sent to your function with each invoke. A larger batch size can often more efficiently absorb the invoke overhead across a larger set of records, increasing your throughput. By default, Lambda invokes your function as soon as records are available. If the batch that Lambda reads from the event source has only one record in it, Lambda sends only one record to the function. To avoid invoking the function with a small number of records, you can tell the event source to buffer records for up to 5 minutes by configuring a batching window. Before invoking the function, Lambda continues to read records from the event source until it has gathered a full batch, the batching window expires, or the batch reaches the payload limit of 6 MB. For more information, see Batching behavior. Warning Lambda event source mappings process each event at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing"} +{"global_id": 1315, "doc_id": "lambda", "chunk_id": "28", "question_id": 4, "question": "What can you configure to avoid invoking the function with a small number of records?", "answer_span": "you can tell the event source to buffer records for up to 5 minutes by configuring a batching window.", "chunk": "Cost Anomaly Detection uses data from AWS Cost Explorer, which has a delay of up to 24 hours. As a result, it can take up to 24 hours to detect an anomaly after usage occurs. To get started with Cost Anomaly Detection, you must first sign up for Cost Explorer. Then, access Cost Anomaly Detection. Function scalability 1071 AWS Lambda Developer Guide Working with streams • Test with different batch and record sizes so that the polling frequency of each event source is tuned to how quickly your function is able to complete its task. The CreateEventSourceMapping BatchSize parameter controls the maximum number of records that can be sent to your function with each invoke. A larger batch size can often more efficiently absorb the invoke overhead across a larger set of records, increasing your throughput. By default, Lambda invokes your function as soon as records are available. If the batch that Lambda reads from the event source has only one record in it, Lambda sends only one record to the function. To avoid invoking the function with a small number of records, you can tell the event source to buffer records for up to 5 minutes by configuring a batching window. Before invoking the function, Lambda continues to read records from the event source until it has gathered a full batch, the batching window expires, or the batch reaches the payload limit of 6 MB. For more information, see Batching behavior. Warning Lambda event source mappings process each event at least once, and duplicate processing of records can occur. 
To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing"} +{"global_id": 1316, "doc_id": "lambda", "chunk_id": "29", "question_id": 1, "question": "What should you do to avoid potential issues related to duplicate events?", "answer_span": "we strongly recommend that you make your function code idempotent.", "chunk": "at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing throughput by adding shards. A Kinesis stream is composed of one or more shards. The rate at which Lambda can read data from Kinesis scales linearly with the number of shards. Increasing the number of shards will directly increase the number of maximum concurrent Lambda function invocations and can increase your Kinesis stream processing throughput. For more information about the relationship between shards and function invocations, see the section called “ Polling and batching streams”. If you are increasing the number of shards in a Kinesis stream, make sure you have picked a good partition key (see Partition Keys) for your data, so that related records end up on the same shards and your data is well distributed. • Use Amazon CloudWatch on IteratorAge to determine if your Kinesis stream is being processed. For example, configure a CloudWatch alarm with a maximum setting to 30000 (30 seconds). Working with streams 1072 AWS Lambda Developer Guide Security best practices • Monitor your usage of AWS Lambda as it relates to security best practices by using AWS Security Hub. Security Hub uses security controls to evaluate resource configurations and security standards to help you comply with various compliance frameworks. For more information about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. For example,"} +{"global_id": 1317, "doc_id": "lambda", "chunk_id": "29", "question_id": 2, "question": "How can you increase Kinesis stream processing throughput?", "answer_span": "Increase Kinesis stream processing throughput by adding shards.", "chunk": "at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing throughput by adding shards. A Kinesis stream is composed of one or more shards. The rate at which Lambda can read data from Kinesis scales linearly with the number of shards. Increasing the number of shards will directly increase the number of maximum concurrent Lambda function invocations and can increase your Kinesis stream processing throughput. For more information about the relationship between shards and function invocations, see the section called “ Polling and batching streams”. 
If you are increasing the number of shards in a Kinesis stream, make sure you have picked a good partition key (see Partition Keys) for your data, so that related records end up on the same shards and your data is well distributed. • Use Amazon CloudWatch on IteratorAge to determine if your Kinesis stream is being processed. For example, configure a CloudWatch alarm with a maximum setting to 30000 (30 seconds). Working with streams 1072 AWS Lambda Developer Guide Security best practices • Monitor your usage of AWS Lambda as it relates to security best practices by using AWS Security Hub. Security Hub uses security controls to evaluate resource configurations and security standards to help you comply with various compliance frameworks. For more information about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. For example,"} +{"global_id": 1318, "doc_id": "lambda", "chunk_id": "29", "question_id": 3, "question": "What does increasing the number of shards in a Kinesis stream do?", "answer_span": "Increasing the number of shards will directly increase the number of maximum concurrent Lambda function invocations and can increase your Kinesis stream processing throughput.", "chunk": "at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing throughput by adding shards. A Kinesis stream is composed of one or more shards. The rate at which Lambda can read data from Kinesis scales linearly with the number of shards. Increasing the number of shards will directly increase the number of maximum concurrent Lambda function invocations and can increase your Kinesis stream processing throughput. For more information about the relationship between shards and function invocations, see the section called “ Polling and batching streams”. If you are increasing the number of shards in a Kinesis stream, make sure you have picked a good partition key (see Partition Keys) for your data, so that related records end up on the same shards and your data is well distributed. • Use Amazon CloudWatch on IteratorAge to determine if your Kinesis stream is being processed. For example, configure a CloudWatch alarm with a maximum setting to 30000 (30 seconds). Working with streams 1072 AWS Lambda Developer Guide Security best practices • Monitor your usage of AWS Lambda as it relates to security best practices by using AWS Security Hub. Security Hub uses security controls to evaluate resource configurations and security standards to help you comply with various compliance frameworks. For more information about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. 
For example,"} +{"global_id": 1319, "doc_id": "lambda", "chunk_id": "29", "question_id": 4, "question": "What should you monitor to determine if your Kinesis stream is being processed?", "answer_span": "Use Amazon CloudWatch on IteratorAge to determine if your Kinesis stream is being processed.", "chunk": "at least once, and duplicate processing of records can occur. To avoid potential issues related to duplicate events, we strongly recommend that you make your function code idempotent. To learn more, see How do I make my Lambda function idempotent in the AWS Knowledge Center. • Increase Kinesis stream processing throughput by adding shards. A Kinesis stream is composed of one or more shards. The rate at which Lambda can read data from Kinesis scales linearly with the number of shards. Increasing the number of shards will directly increase the number of maximum concurrent Lambda function invocations and can increase your Kinesis stream processing throughput. For more information about the relationship between shards and function invocations, see the section called “ Polling and batching streams”. If you are increasing the number of shards in a Kinesis stream, make sure you have picked a good partition key (see Partition Keys) for your data, so that related records end up on the same shards and your data is well distributed. • Use Amazon CloudWatch on IteratorAge to determine if your Kinesis stream is being processed. For example, configure a CloudWatch alarm with a maximum setting to 30000 (30 seconds). Working with streams 1072 AWS Lambda Developer Guide Security best practices • Monitor your usage of AWS Lambda as it relates to security best practices by using AWS Security Hub. Security Hub uses security controls to evaluate resource configurations and security standards to help you comply with various compliance frameworks. For more information about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. For example,"} +{"global_id": 1320, "doc_id": "lambda", "chunk_id": "30", "question_id": 1, "question": "What does GuardDuty Lambda protection help you identify?", "answer_span": "GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account.", "chunk": "about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. For example, if one of your functions queries an IP address that is associated with cryptocurrency-related activity. GuardDuty monitors the network activity logs that are generated when a Lambda function is invoked. To learn more, see Lambda protection in the Amazon GuardDuty User Guide. Security best practices 1073 AWS Lambda Developer Guide How to test serverless functions and applications Testing serverless functions uses traditional test types and techniques, but you must also consider testing serverless applications as a whole. Cloud-based tests will provide the most accurate measure of quality of both your functions and serverless applications. 
A serverless application architecture includes managed services that provide critical application functionality through API calls. For this reason, your development cycle should include automated tests that verify functionality when your function and services interact. If you do not create cloud-based tests, you could encounter issues due to differences between your local environment and the deployed environment. Your continuous integration process should run tests against a suite of resources provisioned in the cloud before promoting your code to the next deployment environment, such as QA, Staging, or Production. Continue reading this short guide to learn about testing strategies for serverless applications, or visit the Serverless Test Samples repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge"} +{"global_id": 1321, "doc_id": "lambda", "chunk_id": "30", "question_id": 2, "question": "What should your development cycle include for serverless applications?", "answer_span": "your development cycle should include automated tests that verify functionality when your function and services interact.", "chunk": "about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. For example, if one of your functions queries an IP address that is associated with cryptocurrency-related activity. GuardDuty monitors the network activity logs that are generated when a Lambda function is invoked. To learn more, see Lambda protection in the Amazon GuardDuty User Guide. Security best practices 1073 AWS Lambda Developer Guide How to test serverless functions and applications Testing serverless functions uses traditional test types and techniques, but you must also consider testing serverless applications as a whole. Cloud-based tests will provide the most accurate measure of quality of both your functions and serverless applications. A serverless application architecture includes managed services that provide critical application functionality through API calls. For this reason, your development cycle should include automated tests that verify functionality when your function and services interact. If you do not create cloud-based tests, you could encounter issues due to differences between your local environment and the deployed environment. Your continuous integration process should run tests against a suite of resources provisioned in the cloud before promoting your code to the next deployment environment, such as QA, Staging, or Production. Continue reading this short guide to learn about testing strategies for serverless applications, or visit the Serverless Test Samples repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. 
For example, verifying the business logic to calculate the delivery charge"} +{"global_id": 1322, "doc_id": "lambda", "chunk_id": "30", "question_id": 3, "question": "What type of tests will provide the most accurate measure of quality for serverless functions?", "answer_span": "Cloud-based tests will provide the most accurate measure of quality of both your functions and serverless applications.", "chunk": "about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. For example, if one of your functions queries an IP address that is associated with cryptocurrency-related activity. GuardDuty monitors the network activity logs that are generated when a Lambda function is invoked. To learn more, see Lambda protection in the Amazon GuardDuty User Guide. Security best practices 1073 AWS Lambda Developer Guide How to test serverless functions and applications Testing serverless functions uses traditional test types and techniques, but you must also consider testing serverless applications as a whole. Cloud-based tests will provide the most accurate measure of quality of both your functions and serverless applications. A serverless application architecture includes managed services that provide critical application functionality through API calls. For this reason, your development cycle should include automated tests that verify functionality when your function and services interact. If you do not create cloud-based tests, you could encounter issues due to differences between your local environment and the deployed environment. Your continuous integration process should run tests against a suite of resources provisioned in the cloud before promoting your code to the next deployment environment, such as QA, Staging, or Production. Continue reading this short guide to learn about testing strategies for serverless applications, or visit the Serverless Test Samples repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge"} +{"global_id": 1323, "doc_id": "lambda", "chunk_id": "30", "question_id": 4, "question": "What should your continuous integration process run tests against?", "answer_span": "your continuous integration process should run tests against a suite of resources provisioned in the cloud before promoting your code to the next deployment environment.", "chunk": "about using Security Hub to evaluate Lambda resources, see AWS Lambda controls in the AWS Security Hub User Guide. • Monitor Lambda network activity logs using Amazon GuardDuty Lambda Protection. GuardDuty Lambda protection helps you identify potential security threats when Lambda functions are invoked in your AWS account. For example, if one of your functions queries an IP address that is associated with cryptocurrency-related activity. GuardDuty monitors the network activity logs that are generated when a Lambda function is invoked. To learn more, see Lambda protection in the Amazon GuardDuty User Guide. 
Security best practices 1073 AWS Lambda Developer Guide How to test serverless functions and applications Testing serverless functions uses traditional test types and techniques, but you must also consider testing serverless applications as a whole. Cloud-based tests will provide the most accurate measure of quality of both your functions and serverless applications. A serverless application architecture includes managed services that provide critical application functionality through API calls. For this reason, your development cycle should include automated tests that verify functionality when your function and services interact. If you do not create cloud-based tests, you could encounter issues due to differences between your local environment and the deployed environment. Your continuous integration process should run tests against a suite of resources provisioned in the cloud before promoting your code to the next deployment environment, such as QA, Staging, or Production. Continue reading this short guide to learn about testing strategies for serverless applications, or visit the Serverless Test Samples repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge"} +{"global_id": 1324, "doc_id": "lambda", "chunk_id": "31", "question_id": 1, "question": "What are unit tests?", "answer_span": "Unit tests - Tests that run against an isolated block of code.", "chunk": "repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge given a particular item and destination. • Integration tests - Tests involving two or more components or services that interact, typically in a cloud environment. For example, verifying a function processes events from a queue. 1074 AWS Lambda Developer Guide • End-to-end tests - Tests that verify behavior across an entire application. For example, ensuring infrastructure is set up correctly and that events flow between services as expected to record a customer's order. Targeted business outcomes Testing serverless solutions may require slightly more time to set up tests that verify event-driven interactions between services. Keep the following practical business reasons in mind as you read this guide: • Increase the quality of your application • Decrease time to build features and fix bugs The quality of an application depends on testing a variety of scenarios to verify functionality. Carefully considering the business scenarios and automating those tests to run against cloud services will raise the quality of your application. Software bugs and configuration problems have the least impact on cost and schedule when caught during an iterative development cycle. If issues remain undetected during development, finding and fixing in production requires more effort by more people. A well planned serverless testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. 
What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve"} +{"global_id": 1325, "doc_id": "lambda", "chunk_id": "31", "question_id": 2, "question": "What do integration tests involve?", "answer_span": "Integration tests - Tests involving two or more components or services that interact, typically in a cloud environment.", "chunk": "repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge given a particular item and destination. • Integration tests - Tests involving two or more components or services that interact, typically in a cloud environment. For example, verifying a function processes events from a queue. 1074 AWS Lambda Developer Guide • End-to-end tests - Tests that verify behavior across an entire application. For example, ensuring infrastructure is set up correctly and that events flow between services as expected to record a customer's order. Targeted business outcomes Testing serverless solutions may require slightly more time to set up tests that verify event-driven interactions between services. Keep the following practical business reasons in mind as you read this guide: • Increase the quality of your application • Decrease time to build features and fix bugs The quality of an application depends on testing a variety of scenarios to verify functionality. Carefully considering the business scenarios and automating those tests to run against cloud services will raise the quality of your application. Software bugs and configuration problems have the least impact on cost and schedule when caught during an iterative development cycle. If issues remain undetected during development, finding and fixing in production requires more effort by more people. A well planned serverless testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve"} +{"global_id": 1326, "doc_id": "lambda", "chunk_id": "31", "question_id": 3, "question": "What is the purpose of end-to-end tests?", "answer_span": "End-to-end tests - Tests that verify behavior across an entire application.", "chunk": "repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge given a particular item and destination. • Integration tests - Tests involving two or more components or services that interact, typically in a cloud environment. For example, verifying a function processes events from a queue. 1074 AWS Lambda Developer Guide • End-to-end tests - Tests that verify behavior across an entire application. For example, ensuring infrastructure is set up correctly and that events flow between services as expected to record a customer's order. 
Targeted business outcomes Testing serverless solutions may require slightly more time to set up tests that verify event-driven interactions between services. Keep the following practical business reasons in mind as you read this guide: • Increase the quality of your application • Decrease time to build features and fix bugs The quality of an application depends on testing a variety of scenarios to verify functionality. Carefully considering the business scenarios and automating those tests to run against cloud services will raise the quality of your application. Software bugs and configuration problems have the least impact on cost and schedule when caught during an iterative development cycle. If issues remain undetected during development, finding and fixing in production requires more effort by more people. A well planned serverless testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve"} +{"global_id": 1327, "doc_id": "lambda", "chunk_id": "31", "question_id": 4, "question": "What may testing serverless solutions require?", "answer_span": "Testing serverless solutions may require slightly more time to set up tests that verify event-driven interactions between services.", "chunk": "repository to dive in with practical examples, specific to your chosen language and runtime. For serverless testing, you will still write unit, integration and end-to-end tests. • Unit tests - Tests that run against an isolated block of code. For example, verifying the business logic to calculate the delivery charge given a particular item and destination. • Integration tests - Tests involving two or more components or services that interact, typically in a cloud environment. For example, verifying a function processes events from a queue. 1074 AWS Lambda Developer Guide • End-to-end tests - Tests that verify behavior across an entire application. For example, ensuring infrastructure is set up correctly and that events flow between services as expected to record a customer's order. Targeted business outcomes Testing serverless solutions may require slightly more time to set up tests that verify event-driven interactions between services. Keep the following practical business reasons in mind as you read this guide: • Increase the quality of your application • Decrease time to build features and fix bugs The quality of an application depends on testing a variety of scenarios to verify functionality. Carefully considering the business scenarios and automating those tests to run against cloud services will raise the quality of your application. Software bugs and configuration problems have the least impact on cost and schedule when caught during an iterative development cycle. If issues remain undetected during development, finding and fixing in production requires more effort by more people. A well planned serverless testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. 
What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve"} +{"global_id": 1328, "doc_id": "lambda", "chunk_id": "32", "question_id": 1, "question": "What will the testing strategy increase?", "answer_span": "testing strategy will increase software quality", "chunk": "testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve software quality. Behavior testing, also known as black box testing, verifies a system works as expected without knowing all the internals. • Run unit tests to check business logic inside Lambda functions. • Verify integrated services are actually invoked, and input parameters are correct. • Check that an event goes through all expected services end-to-end in a workflow. Targeted business outcomes 1075"} +{"global_id": 1329, "doc_id": "lambda", "chunk_id": "32", "question_id": 2, "question": "What does behavior testing verify?", "answer_span": "Behavior testing, also known as black box testing, verifies a system works as expected", "chunk": "testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve software quality. Behavior testing, also known as black box testing, verifies a system works as expected without knowing all the internals. • Run unit tests to check business logic inside Lambda functions. • Verify integrated services are actually invoked, and input parameters are correct. • Check that an event goes through all expected services end-to-end in a workflow. Targeted business outcomes 1075"} +{"global_id": 1330, "doc_id": "lambda", "chunk_id": "32", "question_id": 3, "question": "What should you run to check business logic inside Lambda functions?", "answer_span": "Run unit tests to check business logic inside Lambda functions.", "chunk": "testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve software quality. Behavior testing, also known as black box testing, verifies a system works as expected without knowing all the internals. • Run unit tests to check business logic inside Lambda functions. • Verify integrated services are actually invoked, and input parameters are correct. • Check that an event goes through all expected services end-to-end in a workflow. 
Targeted business outcomes 1075"} +{"global_id": 1331, "doc_id": "lambda", "chunk_id": "32", "question_id": 4, "question": "What should you verify about integrated services?", "answer_span": "Verify integrated services are actually invoked, and input parameters are correct.", "chunk": "testing strategy will increase software quality and improve iteration time by verifying your Lambda functions and applications perform as expected in a cloud environment. What to test We recommend adopting a testing strategy that tests managed service behaviors, cloud configuration, security policies, and the integration with your code to improve software quality. Behavior testing, also known as black box testing, verifies a system works as expected without knowing all the internals. • Run unit tests to check business logic inside Lambda functions. • Verify integrated services are actually invoked, and input parameters are correct. • Check that an event goes through all expected services end-to-end in a workflow. Targeted business outcomes 1075"} +{"global_id": 1332, "doc_id": "cloudfront", "chunk_id": "0", "question_id": 1, "question": "What is Amazon CloudFront?", "answer_span": "Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users.", "chunk": "Amazon CloudFront Developer Guide What is Amazon CloudFront? Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. • If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. • If the content is not in that edge location, CloudFront retrieves it from an origin that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. As an example, suppose that you're serving an image from a traditional web server, not from CloudFront. For example, you might serve an image, sunsetphoto.png, using the URL https:// example.com/sunsetphoto.png. Your users can easily navigate to this URL and see the image. But they probably don't know that their request is routed from one network to another—through the complex collection of interconnected networks that comprise the internet—until the image is found. CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. 
Users get lower latency—the time it takes to load the first byte of"} +{"global_id": 1333, "doc_id": "cloudfront", "chunk_id": "0", "question_id": 2, "question": "How does CloudFront deliver content?", "answer_span": "CloudFront delivers your content through a worldwide network of data centers called edge locations.", "chunk": "Amazon CloudFront Developer Guide What is Amazon CloudFront? Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. • If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. • If the content is not in that edge location, CloudFront retrieves it from an origin that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. As an example, suppose that you're serving an image from a traditional web server, not from CloudFront. For example, you might serve an image, sunsetphoto.png, using the URL https:// example.com/sunsetphoto.png. Your users can easily navigate to this URL and see the image. But they probably don't know that their request is routed from one network to another—through the complex collection of interconnected networks that comprise the internet—until the image is found. CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of"} +{"global_id": 1334, "doc_id": "cloudfront", "chunk_id": "0", "question_id": 3, "question": "What happens if the content is already in the edge location with the lowest latency?", "answer_span": "If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately.", "chunk": "Amazon CloudFront Developer Guide What is Amazon CloudFront? Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. • If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. • If the content is not in that edge location, CloudFront retrieves it from an origin that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. 
As an example, suppose that you're serving an image from a traditional web server, not from CloudFront. For example, you might serve an image, sunsetphoto.png, using the URL https:// example.com/sunsetphoto.png. Your users can easily navigate to this URL and see the image. But they probably don't know that their request is routed from one network to another—through the complex collection of interconnected networks that comprise the internet—until the image is found. CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of"} +{"global_id": 1335, "doc_id": "cloudfront", "chunk_id": "0", "question_id": 4, "question": "What does CloudFront do if the content is not in the edge location?", "answer_span": "If the content is not in that edge location, CloudFront retrieves it from an origin that you've defined.", "chunk": "Amazon CloudFront Developer Guide What is Amazon CloudFront? Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations. When a user requests content that you're serving with CloudFront, the request is routed to the edge location that provides the lowest latency (time delay), so that content is delivered with the best possible performance. • If the content is already in the edge location with the lowest latency, CloudFront delivers it immediately. • If the content is not in that edge location, CloudFront retrieves it from an origin that you've defined—such as an Amazon S3 bucket, a MediaPackage channel, or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content. As an example, suppose that you're serving an image from a traditional web server, not from CloudFront. For example, you might serve an image, sunsetphoto.png, using the URL https:// example.com/sunsetphoto.png. Your users can easily navigate to this URL and see the image. But they probably don't know that their request is routed from one network to another—through the complex collection of interconnected networks that comprise the internet—until the image is found. CloudFront speeds up the distribution of your content by routing each user request through the AWS backbone network to the edge location that can best serve your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of"} +{"global_id": 1336, "doc_id": "cloudfront", "chunk_id": "1", "question_id": 1, "question": "What typically provides the fastest delivery to the viewer?", "answer_span": "Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer.", "chunk": "your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. 
Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file— and higher data transfer rates. You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world. Topics • How you set up CloudFront to deliver content • Choose between standard distribution or multi-tenant distribution 1 Amazon CloudFront Developer Guide • Pricing • Ways to use CloudFront • How CloudFront delivers content • Locations and IP address ranges of CloudFront edge servers • Using CloudFront with an AWS SDK • CloudFront technical resources How you set up CloudFront to deliver content You create a CloudFront distribution to tell CloudFront where you want content to be delivered from, and the details about how to track and manage content delivery. Then CloudFront uses computers—edge servers—that are close to your viewers to deliver that content quickly when someone wants to see it or use it. How you set up CloudFront to deliver content 2 Amazon CloudFront Developer Guide How you configure CloudFront to deliver your content 1. You specify origin servers, like an Amazon S3 bucket or your own HTTP server, from which CloudFront gets your files which will then be distributed from CloudFront edge locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute"} +{"global_id": 1337, "doc_id": "cloudfront", "chunk_id": "1", "question_id": 2, "question": "What does using the AWS network dramatically reduce?", "answer_span": "Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance.", "chunk": "your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file— and higher data transfer rates. You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world. Topics • How you set up CloudFront to deliver content • Choose between standard distribution or multi-tenant distribution 1 Amazon CloudFront Developer Guide • Pricing • Ways to use CloudFront • How CloudFront delivers content • Locations and IP address ranges of CloudFront edge servers • Using CloudFront with an AWS SDK • CloudFront technical resources How you set up CloudFront to deliver content You create a CloudFront distribution to tell CloudFront where you want content to be delivered from, and the details about how to track and manage content delivery. Then CloudFront uses computers—edge servers—that are close to your viewers to deliver that content quickly when someone wants to see it or use it. How you set up CloudFront to deliver content 2 Amazon CloudFront Developer Guide How you configure CloudFront to deliver your content 1. 
You specify origin servers, like an Amazon S3 bucket or your own HTTP server, from which CloudFront gets your files which will then be distributed from CloudFront edge locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute"} +{"global_id": 1338, "doc_id": "cloudfront", "chunk_id": "1", "question_id": 3, "question": "What do you specify to configure CloudFront to deliver your content?", "answer_span": "You specify origin servers, like an Amazon S3 bucket or your own HTTP server, from which CloudFront gets your files which will then be distributed from CloudFront edge locations all over the world.", "chunk": "your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file— and higher data transfer rates. You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world. Topics • How you set up CloudFront to deliver content • Choose between standard distribution or multi-tenant distribution 1 Amazon CloudFront Developer Guide • Pricing • Ways to use CloudFront • How CloudFront delivers content • Locations and IP address ranges of CloudFront edge servers • Using CloudFront with an AWS SDK • CloudFront technical resources How you set up CloudFront to deliver content You create a CloudFront distribution to tell CloudFront where you want content to be delivered from, and the details about how to track and manage content delivery. Then CloudFront uses computers—edge servers—that are close to your viewers to deliver that content quickly when someone wants to see it or use it. How you set up CloudFront to deliver content 2 Amazon CloudFront Developer Guide How you configure CloudFront to deliver your content 1. You specify origin servers, like an Amazon S3 bucket or your own HTTP server, from which CloudFront gets your files which will then be distributed from CloudFront edge locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute"} +{"global_id": 1339, "doc_id": "cloudfront", "chunk_id": "1", "question_id": 4, "question": "What is an origin server?", "answer_span": "An origin server stores the original, definitive version of your objects.", "chunk": "your content. Typically, this is a CloudFront edge server that provides the fastest delivery to the viewer. Using the AWS network dramatically reduces the number of networks that your users' requests must pass through, which improves performance. Users get lower latency—the time it takes to load the first byte of the file— and higher data transfer rates. You also get increased reliability and availability because copies of your files (also known as objects) are now held (or cached) in multiple edge locations around the world. 
Topics • How you set up CloudFront to deliver content • Choose between standard distribution or multi-tenant distribution 1 Amazon CloudFront Developer Guide • Pricing • Ways to use CloudFront • How CloudFront delivers content • Locations and IP address ranges of CloudFront edge servers • Using CloudFront with an AWS SDK • CloudFront technical resources How you set up CloudFront to deliver content You create a CloudFront distribution to tell CloudFront where you want content to be delivered from, and the details about how to track and manage content delivery. Then CloudFront uses computers—edge servers—that are close to your viewers to deliver that content quickly when someone wants to see it or use it. How you set up CloudFront to deliver content 2 Amazon CloudFront Developer Guide How you configure CloudFront to deliver your content 1. You specify origin servers, like an Amazon S3 bucket or your own HTTP server, from which CloudFront gets your files which will then be distributed from CloudFront edge locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute"} +{"global_id": 1340, "doc_id": "cloudfront", "chunk_id": "2", "question_id": 1, "question": "What does an origin server store?", "answer_span": "An origin server stores the original, definitive version of your objects.", "chunk": "locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you manage; these servers are also known as custom origins. 2. You upload your files to your origin servers. Your files, also known as objects, typically include web pages, images, and media files, but can be anything that can be served over HTTP. If you're using an Amazon S3 bucket as an origin server, you can make the objects in your bucket publicly readable, so that anyone who knows the CloudFront URLs for your objects can access them. You also have the option of keeping objects private and controlling who accesses them. See Serve private content with signed URLs and signed cookies. 3. You create a CloudFront distribution, which tells CloudFront which origin servers to get your files from when users request the files through your web site or application. At the same time, you specify details such as whether you want CloudFront to log all requests and whether you want the distribution to be enabled as soon as it's created. 4. CloudFront assigns a domain name to your new distribution that you can see in the CloudFront console, or that is returned in the response to a programmatic request, for example, an API request. If you like, you can add an alternate domain name to use instead. 5. 
CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of"} +{"global_id": 1341, "doc_id": "cloudfront", "chunk_id": "2", "question_id": 2, "question": "What can your HTTP server run on?", "answer_span": "Your HTTP server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you manage; these servers are also known as custom origins.", "chunk": "locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you manage; these servers are also known as custom origins. 2. You upload your files to your origin servers. Your files, also known as objects, typically include web pages, images, and media files, but can be anything that can be served over HTTP. If you're using an Amazon S3 bucket as an origin server, you can make the objects in your bucket publicly readable, so that anyone who knows the CloudFront URLs for your objects can access them. You also have the option of keeping objects private and controlling who accesses them. See Serve private content with signed URLs and signed cookies. 3. You create a CloudFront distribution, which tells CloudFront which origin servers to get your files from when users request the files through your web site or application. At the same time, you specify details such as whether you want CloudFront to log all requests and whether you want the distribution to be enabled as soon as it's created. 4. CloudFront assigns a domain name to your new distribution that you can see in the CloudFront console, or that is returned in the response to a programmatic request, for example, an API request. If you like, you can add an alternate domain name to use instead. 5. CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of"} +{"global_id": 1342, "doc_id": "cloudfront", "chunk_id": "2", "question_id": 3, "question": "What types of files are typically included as objects?", "answer_span": "Your files, also known as objects, typically include web pages, images, and media files, but can be anything that can be served over HTTP.", "chunk": "locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you manage; these servers are also known as custom origins. 2. You upload your files to your origin servers. Your files, also known as objects, typically include web pages, images, and media files, but can be anything that can be served over HTTP. If you're using an Amazon S3 bucket as an origin server, you can make the objects in your bucket publicly readable, so that anyone who knows the CloudFront URLs for your objects can access them. You also have the option of keeping objects private and controlling who accesses them. 
See Serve private content with signed URLs and signed cookies. 3. You create a CloudFront distribution, which tells CloudFront which origin servers to get your files from when users request the files through your web site or application. At the same time, you specify details such as whether you want CloudFront to log all requests and whether you want the distribution to be enabled as soon as it's created. 4. CloudFront assigns a domain name to your new distribution that you can see in the CloudFront console, or that is returned in the response to a programmatic request, for example, an API request. If you like, you can add an alternate domain name to use instead. 5. CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of"} +{"global_id": 1343, "doc_id": "cloudfront", "chunk_id": "2", "question_id": 4, "question": "What does CloudFront assign to your new distribution?", "answer_span": "CloudFront assigns a domain name to your new distribution that you can see in the CloudFront console.", "chunk": "locations all over the world. An origin server stores the original, definitive version of your objects. If you're serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon Elastic Compute Cloud (Amazon EC2) instance or on a server that you manage; these servers are also known as custom origins. 2. You upload your files to your origin servers. Your files, also known as objects, typically include web pages, images, and media files, but can be anything that can be served over HTTP. If you're using an Amazon S3 bucket as an origin server, you can make the objects in your bucket publicly readable, so that anyone who knows the CloudFront URLs for your objects can access them. You also have the option of keeping objects private and controlling who accesses them. See Serve private content with signed URLs and signed cookies. 3. You create a CloudFront distribution, which tells CloudFront which origin servers to get your files from when users request the files through your web site or application. At the same time, you specify details such as whether you want CloudFront to log all requests and whether you want the distribution to be enabled as soon as it's created. 4. CloudFront assigns a domain name to your new distribution that you can see in the CloudFront console, or that is returned in the response to a programmatic request, for example, an API request. If you like, you can add an alternate domain name to use instead. 5. CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of"} +{"global_id": 1344, "doc_id": "cloudfront", "chunk_id": "3", "question_id": 1, "question": "What does CloudFront send to all of its edge locations?", "answer_span": "CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of your files.", "chunk": "example, an API request. If you like, you can add an alternate domain name to use instead. 5. 
CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of your files. As you develop your website or application, you use the domain name that CloudFront provides for your URLs. For example, if CloudFront returns d111111abcdef8.cloudfront.net as the domain name for your distribution, the URL for logo.jpg in your Amazon S3 bucket (or in the root directory on an HTTP server) is https://d111111abcdef8.cloudfront.net/logo.jpg. Or you can set up CloudFront to use your own domain name with your distribution. In that case, the URL might be https://www.example.com/logo.jpg. How you set up CloudFront to deliver content 3 Amazon CloudFront Developer Guide Optionally, you can configure your origin server to add headers to the files, to indicate how long you want the files to stay in the cache in CloudFront edge locations. By default, each file stays in an edge location for 24 hours before it expires. The minimum expiration time is 0 seconds; there isn't a maximum expiration time. For more information, see Manage how long content stays in the cache (expiration). Choose between standard distribution or multi-tenant distribution CloudFront offers distribution options for single websites or apps, and for multi-tenant scenarios. Standard distribution Designed for unique configurations per website or application. Choose this in the following use cases: • You need a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform"} +{"global_id": 1345, "doc_id": "cloudfront", "chunk_id": "3", "question_id": 2, "question": "What is the default expiration time for each file in an edge location?", "answer_span": "By default, each file stays in an edge location for 24 hours before it expires.", "chunk": "example, an API request. If you like, you can add an alternate domain name to use instead. 5. CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of your files. As you develop your website or application, you use the domain name that CloudFront provides for your URLs. For example, if CloudFront returns d111111abcdef8.cloudfront.net as the domain name for your distribution, the URL for logo.jpg in your Amazon S3 bucket (or in the root directory on an HTTP server) is https://d111111abcdef8.cloudfront.net/logo.jpg. Or you can set up CloudFront to use your own domain name with your distribution. In that case, the URL might be https://www.example.com/logo.jpg. How you set up CloudFront to deliver content 3 Amazon CloudFront Developer Guide Optionally, you can configure your origin server to add headers to the files, to indicate how long you want the files to stay in the cache in CloudFront edge locations. By default, each file stays in an edge location for 24 hours before it expires. The minimum expiration time is 0 seconds; there isn't a maximum expiration time. For more information, see Manage how long content stays in the cache (expiration). 
Choose between standard distribution or multi-tenant distribution CloudFront offers distribution options for single websites or apps, and for multi-tenant scenarios. Standard distribution Designed for unique configurations per website or application. Choose this in the following use cases: • You need a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform"} +{"global_id": 1346, "doc_id": "cloudfront", "chunk_id": "3", "question_id": 3, "question": "What type of distribution is designed for unique configurations per website or application?", "answer_span": "Standard distribution Designed for unique configurations per website or application.", "chunk": "example, an API request. If you like, you can add an alternate domain name to use instead. 5. CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of your files. As you develop your website or application, you use the domain name that CloudFront provides for your URLs. For example, if CloudFront returns d111111abcdef8.cloudfront.net as the domain name for your distribution, the URL for logo.jpg in your Amazon S3 bucket (or in the root directory on an HTTP server) is https://d111111abcdef8.cloudfront.net/logo.jpg. Or you can set up CloudFront to use your own domain name with your distribution. In that case, the URL might be https://www.example.com/logo.jpg. How you set up CloudFront to deliver content 3 Amazon CloudFront Developer Guide Optionally, you can configure your origin server to add headers to the files, to indicate how long you want the files to stay in the cache in CloudFront edge locations. By default, each file stays in an edge location for 24 hours before it expires. The minimum expiration time is 0 seconds; there isn't a maximum expiration time. For more information, see Manage how long content stays in the cache (expiration). Choose between standard distribution or multi-tenant distribution CloudFront offers distribution options for single websites or apps, and for multi-tenant scenarios. Standard distribution Designed for unique configurations per website or application. Choose this in the following use cases: • You need a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform"} +{"global_id": 1347, "doc_id": "cloudfront", "chunk_id": "3", "question_id": 4, "question": "Who is the multi-tenant distribution specifically designed for?", "answer_span": "Designed specifically for SaaS providers and multi-tenant scenarios.", "chunk": "example, an API request. If you like, you can add an alternate domain name to use instead. 5. 
CloudFront sends your distribution's configuration (but not your content) to all of its edge locations or points of presence (POPs)— collections of servers in geographically-dispersed data centers where CloudFront caches copies of your files. As you develop your website or application, you use the domain name that CloudFront provides for your URLs. For example, if CloudFront returns d111111abcdef8.cloudfront.net as the domain name for your distribution, the URL for logo.jpg in your Amazon S3 bucket (or in the root directory on an HTTP server) is https://d111111abcdef8.cloudfront.net/logo.jpg. Or you can set up CloudFront to use your own domain name with your distribution. In that case, the URL might be https://www.example.com/logo.jpg. How you set up CloudFront to deliver content 3 Amazon CloudFront Developer Guide Optionally, you can configure your origin server to add headers to the files, to indicate how long you want the files to stay in the cache in CloudFront edge locations. By default, each file stays in an edge location for 24 hours before it expires. The minimum expiration time is 0 seconds; there isn't a maximum expiration time. For more information, see Manage how long content stays in the cache (expiration). Choose between standard distribution or multi-tenant distribution CloudFront offers distribution options for single websites or apps, and for multi-tenant scenarios. Standard distribution Designed for unique configurations per website or application. Choose this in the following use cases: • You need a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform"} +{"global_id": 1348, "doc_id": "cloudfront", "chunk_id": "4", "question_id": 1, "question": "What type of distribution do most people start with?", "answer_span": "Most people start with a standard distribution.", "chunk": "a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform to serve multiple customer websites or applications • You need to manage multiple similar distributions efficiently • You want centralized control over shared configurations For more information, see Understand how multi-tenant distributions work. Pricing CloudFront charges for data transfers out from its edge locations, along with HTTP or HTTPS requests. Pricing varies by usage type, geographical region, and feature selection. The data transfer from your origin to CloudFront is always free when using AWS origins like Amazon Simple Storage Service (Amazon S3), Elastic Load Balancing, or Amazon API Gateway. Choose between standard distribution or multi-tenant distribution 4 Amazon CloudFront Developer Guide You are only billed for the outbound data transfer from CloudFront to the viewer when using AWS origins. For more information, see CloudFront pricing and the Billing and Savings Bundle FAQs. Ways to use CloudFront Using CloudFront can help you accomplish a variety of goals. 
This section lists just a few, together with links to more information, to give you an idea of the possibilities. Topics • Accelerate static website content delivery • Serve video on demand or live streaming video • Encrypt specific fields throughout system processing • Customize at the edge • Serve private content by using Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your"} +{"global_id": 1349, "doc_id": "cloudfront", "chunk_id": "4", "question_id": 2, "question": "What is designed specifically for SaaS providers and multi-tenant scenarios?", "answer_span": "Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager)", "chunk": "a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform to serve multiple customer websites or applications • You need to manage multiple similar distributions efficiently • You want centralized control over shared configurations For more information, see Understand how multi-tenant distributions work. Pricing CloudFront charges for data transfers out from its edge locations, along with HTTP or HTTPS requests. Pricing varies by usage type, geographical region, and feature selection. The data transfer from your origin to CloudFront is always free when using AWS origins like Amazon Simple Storage Service (Amazon S3), Elastic Load Balancing, or Amazon API Gateway. Choose between standard distribution or multi-tenant distribution 4 Amazon CloudFront Developer Guide You are only billed for the outbound data transfer from CloudFront to the viewer when using AWS origins. For more information, see CloudFront pricing and the Billing and Savings Bundle FAQs. Ways to use CloudFront Using CloudFront can help you accomplish a variety of goals. This section lists just a few, together with links to more information, to give you an idea of the possibilities. Topics • Accelerate static website content delivery • Serve video on demand or live streaming video • Encrypt specific fields throughout system processing • Customize at the edge • Serve private content by using Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your"} +{"global_id": 1350, "doc_id": "cloudfront", "chunk_id": "4", "question_id": 3, "question": "What is always free when using AWS origins like Amazon S3?", "answer_span": "The data transfer from your origin to CloudFront is always free when using AWS origins like Amazon Simple Storage Service (Amazon S3), Elastic Load Balancing, or Amazon API Gateway.", "chunk": "a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. 
Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform to serve multiple customer websites or applications • You need to manage multiple similar distributions efficiently • You want centralized control over shared configurations For more information, see Understand how multi-tenant distributions work. Pricing CloudFront charges for data transfers out from its edge locations, along with HTTP or HTTPS requests. Pricing varies by usage type, geographical region, and feature selection. The data transfer from your origin to CloudFront is always free when using AWS origins like Amazon Simple Storage Service (Amazon S3), Elastic Load Balancing, or Amazon API Gateway. Choose between standard distribution or multi-tenant distribution 4 Amazon CloudFront Developer Guide You are only billed for the outbound data transfer from CloudFront to the viewer when using AWS origins. For more information, see CloudFront pricing and the Billing and Savings Bundle FAQs. Ways to use CloudFront Using CloudFront can help you accomplish a variety of goals. This section lists just a few, together with links to more information, to give you an idea of the possibilities. Topics • Accelerate static website content delivery • Serve video on demand or live streaming video • Encrypt specific fields throughout system processing • Customize at the edge • Serve private content by using Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your"} +{"global_id": 1351, "doc_id": "cloudfront", "chunk_id": "4", "question_id": 4, "question": "What can CloudFront help you accelerate?", "answer_span": "Accelerate static website content delivery", "chunk": "a standalone CloudFront distribution • Each site or application requires its own custom settings Most people start with a standard distribution. Multi-tenant distribution and distribution tenants (CloudFront SaaS Manager) Designed specifically for SaaS providers and multi-tenant scenarios. Choose this in the following use cases: • You're building a SaaS platform to serve multiple customer websites or applications • You need to manage multiple similar distributions efficiently • You want centralized control over shared configurations For more information, see Understand how multi-tenant distributions work. Pricing CloudFront charges for data transfers out from its edge locations, along with HTTP or HTTPS requests. Pricing varies by usage type, geographical region, and feature selection. The data transfer from your origin to CloudFront is always free when using AWS origins like Amazon Simple Storage Service (Amazon S3), Elastic Load Balancing, or Amazon API Gateway. Choose between standard distribution or multi-tenant distribution 4 Amazon CloudFront Developer Guide You are only billed for the outbound data transfer from CloudFront to the viewer when using AWS origins. For more information, see CloudFront pricing and the Billing and Savings Bundle FAQs. Ways to use CloudFront Using CloudFront can help you accomplish a variety of goals. This section lists just a few, together with links to more information, to give you an idea of the possibilities. 
Topics • Accelerate static website content delivery • Serve video on demand or live streaming video • Encrypt specific fields throughout system processing • Customize at the edge • Serve private content by using Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your"} +{"global_id": 1352, "doc_id": "cloudfront", "chunk_id": "5", "question_id": 1, "question": "What can CloudFront speed up the delivery of?", "answer_span": "CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe.", "chunk": "Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website. A simple approach for storing and delivering static content is to use an Amazon S3 bucket. Using S3 together with CloudFront has a number of advantages, including the option to use origin access control to easily restrict access to your Amazon S3 content. For more information about using Amazon S3 together with CloudFront, including an AWS CloudFormation template to help you get started quickly, see Get started with a secure static website. Serve video on demand or live streaming video CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events. Ways to use CloudFront 5 Amazon CloudFront Developer Guide • For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft Smooth Streaming, and CMAF, to any device. • For broadcasting a live stream, you can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the fragments in the right order can be combined, to reduce the load on your origin server. For more information about how to deliver streaming content with CloudFront, see Video on demand and live streaming video with CloudFront. Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that"} +{"global_id": 1353, "doc_id": "cloudfront", "chunk_id": "5", "question_id": 2, "question": "What is a simple approach for storing and delivering static content?", "answer_span": "A simple approach for storing and delivering static content is to use an Amazon S3 bucket.", "chunk": "Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website. 
A simple approach for storing and delivering static content is to use an Amazon S3 bucket. Using S3 together with CloudFront has a number of advantages, including the option to use origin access control to easily restrict access to your Amazon S3 content. For more information about using Amazon S3 together with CloudFront, including an AWS CloudFormation template to help you get started quickly, see Get started with a secure static website. Serve video on demand or live streaming video CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events. Ways to use CloudFront 5 Amazon CloudFront Developer Guide • For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft Smooth Streaming, and CMAF, to any device. • For broadcasting a live stream, you can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the fragments in the right order can be combined, to reduce the load on your origin server. For more information about how to deliver streaming content with CloudFront, see Video on demand and live streaming video with CloudFront. Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that"} +{"global_id": 1354, "doc_id": "cloudfront", "chunk_id": "5", "question_id": 3, "question": "What formats can CloudFront stream for video on demand?", "answer_span": "For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft Smooth Streaming, and CMAF, to any device.", "chunk": "Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website. A simple approach for storing and delivering static content is to use an Amazon S3 bucket. Using S3 together with CloudFront has a number of advantages, including the option to use origin access control to easily restrict access to your Amazon S3 content. For more information about using Amazon S3 together with CloudFront, including an AWS CloudFormation template to help you get started quickly, see Get started with a secure static website. Serve video on demand or live streaming video CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events. Ways to use CloudFront 5 Amazon CloudFront Developer Guide • For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft Smooth Streaming, and CMAF, to any device. • For broadcasting a live stream, you can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the fragments in the right order can be combined, to reduce the load on your origin server. For more information about how to deliver streaming content with CloudFront, see Video on demand and live streaming video with CloudFront. 
Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that"} +{"global_id": 1355, "doc_id": "cloudfront", "chunk_id": "5", "question_id": 4, "question": "What does adding field-level encryption allow you to do?", "answer_span": "When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security.", "chunk": "Lambda@Edge customizations Accelerate static website content delivery CloudFront can speed up the delivery of your static content (for example, images, style sheets, JavaScript, and so on) to viewers across the globe. By using CloudFront, you can take advantage of the AWS backbone network and CloudFront edge servers to give your viewers a fast, safe, and reliable experience when they visit your website. A simple approach for storing and delivering static content is to use an Amazon S3 bucket. Using S3 together with CloudFront has a number of advantages, including the option to use origin access control to easily restrict access to your Amazon S3 content. For more information about using Amazon S3 together with CloudFront, including an AWS CloudFormation template to help you get started quickly, see Get started with a secure static website. Serve video on demand or live streaming video CloudFront offers several options for streaming your media to global viewers—both pre-recorded files and live events. Ways to use CloudFront 5 Amazon CloudFront Developer Guide • For video on demand (VOD) streaming, you can use CloudFront to stream in common formats such as MPEG DASH, Apple HLS, Microsoft Smooth Streaming, and CMAF, to any device. • For broadcasting a live stream, you can cache media fragments at the edge, so that multiple requests for the manifest file that delivers the fragments in the right order can be combined, to reduce the load on your origin server. For more information about how to deliver streaming content with CloudFront, see Video on demand and live streaming video with CloudFront. Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that"} +{"global_id": 1356, "doc_id": "cloudfront", "chunk_id": "6", "question_id": 1, "question": "What can you protect throughout system processing in addition to HTTPS security?", "answer_span": "you can protect specific data throughout system processing in addition to HTTPS security", "chunk": "Video on demand and live streaming video with CloudFront. Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that only certain applications at your origin can see the data. To set up field-level encryption, you add a public key to CloudFront, and then specify the set of fields that you want to be encrypted with the key. For more information, see Use field-level encryption to help protect sensitive data. 
Customize at the edge Running serverless code at the edge opens up a number of possibilities for customizing the content and experience for viewers, at reduced latency. For example, you can return a custom error message when your origin server is down for maintenance, so viewers don't get a generic HTTP error message. Or you can use a function to help authorize users and control access to your content, before CloudFront forwards a request to your origin. Using Lambda@Edge with CloudFront enables a variety of ways to customize the content that CloudFront delivers. To learn more about Lambda@Edge and how to create and deploy functions with CloudFront, see Customize at the edge with Lambda@Edge. To see a number of code samples that you can customize for your own solutions, see Lambda@Edge example functions. Serve private content by using Lambda@Edge customizations Using Lambda@Edge can help you configure your CloudFront distribution to serve private content from your own custom origin, in addition to using signed URLs or signed cookies. To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or"} +{"global_id": 1357, "doc_id": "cloudfront", "chunk_id": "6", "question_id": 2, "question": "What do you add to CloudFront to set up field-level encryption?", "answer_span": "you add a public key to CloudFront", "chunk": "Video on demand and live streaming video with CloudFront. Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that only certain applications at your origin can see the data. To set up field-level encryption, you add a public key to CloudFront, and then specify the set of fields that you want to be encrypted with the key. For more information, see Use field-level encryption to help protect sensitive data. Customize at the edge Running serverless code at the edge opens up a number of possibilities for customizing the content and experience for viewers, at reduced latency. For example, you can return a custom error message when your origin server is down for maintenance, so viewers don't get a generic HTTP error message. Or you can use a function to help authorize users and control access to your content, before CloudFront forwards a request to your origin. Using Lambda@Edge with CloudFront enables a variety of ways to customize the content that CloudFront delivers. To learn more about Lambda@Edge and how to create and deploy functions with CloudFront, see Customize at the edge with Lambda@Edge. To see a number of code samples that you can customize for your own solutions, see Lambda@Edge example functions. Serve private content by using Lambda@Edge customizations Using Lambda@Edge can help you configure your CloudFront distribution to serve private content from your own custom origin, in addition to using signed URLs or signed cookies. 
To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or"} +{"global_id": 1358, "doc_id": "cloudfront", "chunk_id": "6", "question_id": 3, "question": "What does using Lambda@Edge with CloudFront enable?", "answer_span": "Using Lambda@Edge with CloudFront enables a variety of ways to customize the content that CloudFront delivers", "chunk": "Video on demand and live streaming video with CloudFront. Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that only certain applications at your origin can see the data. To set up field-level encryption, you add a public key to CloudFront, and then specify the set of fields that you want to be encrypted with the key. For more information, see Use field-level encryption to help protect sensitive data. Customize at the edge Running serverless code at the edge opens up a number of possibilities for customizing the content and experience for viewers, at reduced latency. For example, you can return a custom error message when your origin server is down for maintenance, so viewers don't get a generic HTTP error message. Or you can use a function to help authorize users and control access to your content, before CloudFront forwards a request to your origin. Using Lambda@Edge with CloudFront enables a variety of ways to customize the content that CloudFront delivers. To learn more about Lambda@Edge and how to create and deploy functions with CloudFront, see Customize at the edge with Lambda@Edge. To see a number of code samples that you can customize for your own solutions, see Lambda@Edge example functions. Serve private content by using Lambda@Edge customizations Using Lambda@Edge can help you configure your CloudFront distribution to serve private content from your own custom origin, in addition to using signed URLs or signed cookies. To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or"} +{"global_id": 1359, "doc_id": "cloudfront", "chunk_id": "6", "question_id": 4, "question": "What can you do to serve private content using CloudFront?", "answer_span": "Require that your users (viewers) access content using signed URLs or signed cookies", "chunk": "Video on demand and live streaming video with CloudFront. Encrypt specific fields throughout system processing When you configure HTTPS with CloudFront, you already have secure end-to-end connections to origin servers. When you add field-level encryption, you can protect specific data throughout system processing in addition to HTTPS security, so that only certain applications at your origin can see the data. To set up field-level encryption, you add a public key to CloudFront, and then specify the set of fields that you want to be encrypted with the key. For more information, see Use field-level encryption to help protect sensitive data. Customize at the edge Running serverless code at the edge opens up a number of possibilities for customizing the content and experience for viewers, at reduced latency. 
For example, you can return a custom error message when your origin server is down for maintenance, so viewers don't get a generic HTTP error message. Or you can use a function to help authorize users and control access to your content, before CloudFront forwards a request to your origin. Using Lambda@Edge with CloudFront enables a variety of ways to customize the content that CloudFront delivers. To learn more about Lambda@Edge and how to create and deploy functions with CloudFront, see Customize at the edge with Lambda@Edge. To see a number of code samples that you can customize for your own solutions, see Lambda@Edge example functions. Serve private content by using Lambda@Edge customizations Using Lambda@Edge can help you configure your CloudFront distribution to serve private content from your own custom origin, in addition to using signed URLs or signed cookies. To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or"} +{"global_id": 1360, "doc_id": "cloudfront", "chunk_id": "7", "question_id": 1, "question": "What is required for users to access private content using CloudFront?", "answer_span": "Require that your users (viewers) access content using signed URLs or signed cookies.", "chunk": "serve private content from your own custom origin, in addition to using signed URLs or signed cookies. To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or signed cookies. • Restrict access to your origin so that it's only available from CloudFront's origin-facing servers. To do this, you can do one of the following: • For an Amazon S3 origin, you can use an origin access control (OAC). • For a custom origin, you can do the following: • If the custom origin is protected by an Amazon VPC security group or AWS Firewall Manager, you can use the CloudFront managed prefix list to allow inbound traffic to your origin from only CloudFront's origin-facing IP addresses. • Use a custom HTTP header to restrict access to only requests from CloudFront. For more information, see the section called “Restrict access to files on custom origins” and the section called “Add custom headers to origin requests”. For an example that uses a custom header to restrict access to an Application Load Balancer origin, see the section called “Restrict access to Application Load Balancers”. • If the custom origin requires custom access control logic, you can use Lambda@Edge to implement that logic, as described in this blog post: Serving Private Content Using Amazon CloudFront & Lambda@Edge. How CloudFront delivers content After some initial setup, CloudFront works together with your website or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. 
Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to"} +{"global_id": 1361, "doc_id": "cloudfront", "chunk_id": "7", "question_id": 2, "question": "What can you use for an Amazon S3 origin to restrict access?", "answer_span": "For an Amazon S3 origin, you can use an origin access control (OAC).", "chunk": "serve private content from your own custom origin, in addition to using signed URLs or signed cookies. To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or signed cookies. • Restrict access to your origin so that it's only available from CloudFront's origin-facing servers. To do this, you can do one of the following: • For an Amazon S3 origin, you can use an origin access control (OAC). • For a custom origin, you can do the following: • If the custom origin is protected by an Amazon VPC security group or AWS Firewall Manager, you can use the CloudFront managed prefix list to allow inbound traffic to your origin from only CloudFront's origin-facing IP addresses. • Use a custom HTTP header to restrict access to only requests from CloudFront. For more information, see the section called “Restrict access to files on custom origins” and the section called “Add custom headers to origin requests”. For an example that uses a custom header to restrict access to an Application Load Balancer origin, see the section called “Restrict access to Application Load Balancers”. • If the custom origin requires custom access control logic, you can use Lambda@Edge to implement that logic, as described in this blog post: Serving Private Content Using Amazon CloudFront & Lambda@Edge. How CloudFront delivers content After some initial setup, CloudFront works together with your website or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to"} +{"global_id": 1362, "doc_id": "cloudfront", "chunk_id": "7", "question_id": 3, "question": "What can you use to restrict access to a custom origin protected by an Amazon VPC security group?", "answer_span": "you can use the CloudFront managed prefix list to allow inbound traffic to your origin from only CloudFront's origin-facing IP addresses.", "chunk": "serve private content from your own custom origin, in addition to using signed URLs or signed cookies. To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or signed cookies. • Restrict access to your origin so that it's only available from CloudFront's origin-facing servers. To do this, you can do one of the following: • For an Amazon S3 origin, you can use an origin access control (OAC). • For a custom origin, you can do the following: • If the custom origin is protected by an Amazon VPC security group or AWS Firewall Manager, you can use the CloudFront managed prefix list to allow inbound traffic to your origin from only CloudFront's origin-facing IP addresses. 
• Use a custom HTTP header to restrict access to only requests from CloudFront. For more information, see the section called “Restrict access to files on custom origins” and the section called “Add custom headers to origin requests”. For an example that uses a custom header to restrict access to an Application Load Balancer origin, see the section called “Restrict access to Application Load Balancers”. • If the custom origin requires custom access control logic, you can use Lambda@Edge to implement that logic, as described in this blog post: Serving Private Content Using Amazon CloudFront & Lambda@Edge. How CloudFront delivers content After some initial setup, CloudFront works together with your website or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to"} +{"global_id": 1363, "doc_id": "cloudfront", "chunk_id": "7", "question_id": 4, "question": "What can be used to implement custom access control logic for a custom origin?", "answer_span": "you can use Lambda@Edge to implement that logic", "chunk": "serve private content from your own custom origin, in addition to using signed URLs or signed cookies. To serve private content using CloudFront, you do the following: Encrypt specific fields throughout system processing 6 Amazon CloudFront Developer Guide • Require that your users (viewers) access content using signed URLs or signed cookies. • Restrict access to your origin so that it's only available from CloudFront's origin-facing servers. To do this, you can do one of the following: • For an Amazon S3 origin, you can use an origin access control (OAC). • For a custom origin, you can do the following: • If the custom origin is protected by an Amazon VPC security group or AWS Firewall Manager, you can use the CloudFront managed prefix list to allow inbound traffic to your origin from only CloudFront's origin-facing IP addresses. • Use a custom HTTP header to restrict access to only requests from CloudFront. For more information, see the section called “Restrict access to files on custom origins” and the section called “Add custom headers to origin requests”. For an example that uses a custom header to restrict access to an Application Load Balancer origin, see the section called “Restrict access to Application Load Balancers”. • If the custom origin requires custom access control logic, you can use Lambda@Edge to implement that logic, as described in this blog post: Serving Private Content Using Amazon CloudFront & Lambda@Edge. How CloudFront delivers content After some initial setup, CloudFront works together with your website or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to"} +{"global_id": 1364, "doc_id": "cloudfront", "chunk_id": "8", "question_id": 1, "question": "What happens when users request your objects after configuring CloudFront?", "answer_span": "After you configure CloudFront to deliver your content, here’s what happens when users request your objects: 1. 
A user accesses your website or application and sends a request for an object, such as an image file or an HTML file.", "chunk": "or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to deliver your content, here’s what happens when users request your objects: 1. A user accesses your website or application and sends a request for an object, such as an image file or an HTML file. How CloudFront delivers content 7 Amazon CloudFront Developer Guide 2. DNS routes the request to the CloudFront POP (edge location) that can best serve the request, typically the nearest CloudFront POP in terms of latency. 3. CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, CloudFront does the following: a. CloudFront compares the request with the specifications in your distribution and forwards the request to your origin server for the corresponding object—for example, to your Amazon S3 bucket or your HTTP server. b. The origin server sends the object back to the edge location. c. As soon as the first byte arrives from the origin, CloudFront begins to forward the object to the user. CloudFront also adds the object to the cache for the next time someone requests it. How CloudFront works with regional edge caches CloudFront points of presence (also known as POPs or edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content."} +{"global_id": 1365, "doc_id": "cloudfront", "chunk_id": "8", "question_id": 2, "question": "How does DNS route the request for content?", "answer_span": "DNS routes the request to the CloudFront POP (edge location) that can best serve the request, typically the nearest CloudFront POP in terms of latency.", "chunk": "or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to deliver your content, here’s what happens when users request your objects: 1. A user accesses your website or application and sends a request for an object, such as an image file or an HTML file. How CloudFront delivers content 7 Amazon CloudFront Developer Guide 2. DNS routes the request to the CloudFront POP (edge location) that can best serve the request, typically the nearest CloudFront POP in terms of latency. 3. CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, CloudFront does the following: a. CloudFront compares the request with the specifications in your distribution and forwards the request to your origin server for the corresponding object—for example, to your Amazon S3 bucket or your HTTP server. b. The origin server sends the object back to the edge location. c. 
As soon as the first byte arrives from the origin, CloudFront begins to forward the object to the user. CloudFront also adds the object to the cache for the next time someone requests it. How CloudFront works with regional edge caches CloudFront points of presence (also known as POPs or edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content."} +{"global_id": 1366, "doc_id": "cloudfront", "chunk_id": "8", "question_id": 3, "question": "What does CloudFront do if the requested object is not in the cache?", "answer_span": "If the object is not in the cache, CloudFront does the following: a. CloudFront compares the request with the specifications in your distribution and forwards the request to your origin server for the corresponding object—for example, to your Amazon S3 bucket or your HTTP server.", "chunk": "or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to deliver your content, here’s what happens when users request your objects: 1. A user accesses your website or application and sends a request for an object, such as an image file or an HTML file. How CloudFront delivers content 7 Amazon CloudFront Developer Guide 2. DNS routes the request to the CloudFront POP (edge location) that can best serve the request, typically the nearest CloudFront POP in terms of latency. 3. CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, CloudFront does the following: a. CloudFront compares the request with the specifications in your distribution and forwards the request to your origin server for the corresponding object—for example, to your Amazon S3 bucket or your HTTP server. b. The origin server sends the object back to the edge location. c. As soon as the first byte arrives from the origin, CloudFront begins to forward the object to the user. CloudFront also adds the object to the cache for the next time someone requests it. How CloudFront works with regional edge caches CloudFront points of presence (also known as POPs or edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content."} +{"global_id": 1367, "doc_id": "cloudfront", "chunk_id": "8", "question_id": 4, "question": "What is the purpose of regional edge caches in CloudFront?", "answer_span": "CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content.", "chunk": "or application and speeds up delivery of your content. This section explains how CloudFront serves your content when viewers request it. 
Topics • How CloudFront delivers content to your users • How CloudFront works with regional edge caches How CloudFront delivers content to your users After you configure CloudFront to deliver your content, here’s what happens when users request your objects: 1. A user accesses your website or application and sends a request for an object, such as an image file or an HTML file. How CloudFront delivers content 7 Amazon CloudFront Developer Guide 2. DNS routes the request to the CloudFront POP (edge location) that can best serve the request, typically the nearest CloudFront POP in terms of latency. 3. CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, CloudFront does the following: a. CloudFront compares the request with the specifications in your distribution and forwards the request to your origin server for the corresponding object—for example, to your Amazon S3 bucket or your HTTP server. b. The origin server sends the object back to the edge location. c. As soon as the first byte arrives from the origin, CloudFront begins to forward the object to the user. CloudFront also adds the object to the cache for the next time someone requests it. How CloudFront works with regional edge caches CloudFront points of presence (also known as POPs or edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content."} +{"global_id": 1368, "doc_id": "cloudfront", "chunk_id": "9", "question_id": 1, "question": "What do regional edge caches help with?", "answer_span": "Regional edge caches help with all types of content, particularly content that tends to become less popular over time.", "chunk": "edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content. How CloudFront works with regional edge caches 8 Amazon CloudFront Developer Guide Regional edge caches help with all types of content, particularly content that tends to become less popular over time. Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity. How regional caches work Regional edge caches are CloudFront locations that are deployed globally, close to your viewers. They’re located between your origin server and the POPs—global edge locations that serve content directly to viewers. As objects become less popular, individual POPs might remove those objects to make room for more popular content. Regional edge caches have a larger cache than an individual POP, so objects remain in the cache longer at the nearest regional edge cache location. This helps keep more of your content closer to your viewers, reducing the need for CloudFront to go back to your origin server, and improving overall performance for viewers. When a viewer makes a request on your website or through your application, DNS routes the request to the POP that can best serve the user’s request. This location is typically the nearest CloudFront edge location in terms of latency. 
In the POP, CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, the POP typically goes to the nearest regional edge cache to"} +{"global_id": 1369, "doc_id": "cloudfront", "chunk_id": "9", "question_id": 2, "question": "Where are regional edge caches located?", "answer_span": "Regional edge caches are CloudFront locations that are deployed globally, close to your viewers.", "chunk": "edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content. How CloudFront works with regional edge caches 8 Amazon CloudFront Developer Guide Regional edge caches help with all types of content, particularly content that tends to become less popular over time. Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity. How regional caches work Regional edge caches are CloudFront locations that are deployed globally, close to your viewers. They’re located between your origin server and the POPs—global edge locations that serve content directly to viewers. As objects become less popular, individual POPs might remove those objects to make room for more popular content. Regional edge caches have a larger cache than an individual POP, so objects remain in the cache longer at the nearest regional edge cache location. This helps keep more of your content closer to your viewers, reducing the need for CloudFront to go back to your origin server, and improving overall performance for viewers. When a viewer makes a request on your website or through your application, DNS routes the request to the POP that can best serve the user’s request. This location is typically the nearest CloudFront edge location in terms of latency. In the POP, CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, the POP typically goes to the nearest regional edge cache to"} +{"global_id": 1370, "doc_id": "cloudfront", "chunk_id": "9", "question_id": 3, "question": "What happens when a viewer makes a request on a website using CloudFront?", "answer_span": "When a viewer makes a request on your website or through your application, DNS routes the request to the POP that can best serve the user’s request.", "chunk": "edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content. How CloudFront works with regional edge caches 8 Amazon CloudFront Developer Guide Regional edge caches help with all types of content, particularly content that tends to become less popular over time. Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity. How regional caches work Regional edge caches are CloudFront locations that are deployed globally, close to your viewers. 
They’re located between your origin server and the POPs—global edge locations that serve content directly to viewers. As objects become less popular, individual POPs might remove those objects to make room for more popular content. Regional edge caches have a larger cache than an individual POP, so objects remain in the cache longer at the nearest regional edge cache location. This helps keep more of your content closer to your viewers, reducing the need for CloudFront to go back to your origin server, and improving overall performance for viewers. When a viewer makes a request on your website or through your application, DNS routes the request to the POP that can best serve the user’s request. This location is typically the nearest CloudFront edge location in terms of latency. In the POP, CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, the POP typically goes to the nearest regional edge cache to"} +{"global_id": 1371, "doc_id": "cloudfront", "chunk_id": "9", "question_id": 4, "question": "What does CloudFront do if the requested object is not in the cache?", "answer_span": "If the object is not in the cache, the POP typically goes to the nearest regional edge cache.", "chunk": "edge locations) make sure that popular content can be served quickly to your viewers. CloudFront also has regional edge caches that bring more of your content closer to your viewers, even when the content is not popular enough to stay at a POP, to help improve performance for that content. How CloudFront works with regional edge caches 8 Amazon CloudFront Developer Guide Regional edge caches help with all types of content, particularly content that tends to become less popular over time. Examples include user-generated content, such as video, photos, or artwork; e-commerce assets such as product photos and videos; and news and event-related content that might suddenly find new popularity. How regional caches work Regional edge caches are CloudFront locations that are deployed globally, close to your viewers. They’re located between your origin server and the POPs—global edge locations that serve content directly to viewers. As objects become less popular, individual POPs might remove those objects to make room for more popular content. Regional edge caches have a larger cache than an individual POP, so objects remain in the cache longer at the nearest regional edge cache location. This helps keep more of your content closer to your viewers, reducing the need for CloudFront to go back to your origin server, and improving overall performance for viewers. When a viewer makes a request on your website or through your application, DNS routes the request to the POP that can best serve the user’s request. This location is typically the nearest CloudFront edge location in terms of latency. In the POP, CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, the POP typically goes to the nearest regional edge cache to"} +{"global_id": 1372, "doc_id": "cloudfront", "chunk_id": "10", "question_id": 1, "question": "What does CloudFront do if the object is in the cache at the POP?", "answer_span": "If the object is in the cache, CloudFront returns it to the user.", "chunk": "CloudFront edge location in terms of latency. In the POP, CloudFront checks its cache for the requested object. 
If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, the POP typically goes to the nearest regional edge cache to fetch it. For more information about when the POP skips the regional edge cache and goes directly to the origin, see the following note. In the regional edge cache location, CloudFront again checks its cache for the requested object. If the object is in the cache, CloudFront forwards it to the POP that requested it. As soon as the first byte arrives from regional edge cache location, CloudFront begins to forward the object to the user. CloudFront also adds the object to the cache in the POP for the next time someone requests it. For objects not cached at either the POP or the regional edge cache location, CloudFront compares the request with the specifications in your distributions and forwards the request to the origin server. After your origin server sends the object back to the regional edge cache location, it is forwarded to the POP, and then CloudFront forwards it to the user. In this case, CloudFront also adds the object to the cache in the regional edge cache location in addition to the POP for the next time a viewer requests it. This makes sure that all of the POPs in a region share a local cache, eliminating multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1373, "doc_id": "cloudfront", "chunk_id": "10", "question_id": 2, "question": "What happens if the object is not in the cache at the POP?", "answer_span": "If the object is not in the cache, the POP typically goes to the nearest regional edge cache to fetch it.", "chunk": "CloudFront edge location in terms of latency. In the POP, CloudFront checks its cache for the requested object. If the object is in the cache, CloudFront returns it to the user. If the object is not in the cache, the POP typically goes to the nearest regional edge cache to fetch it. For more information about when the POP skips the regional edge cache and goes directly to the origin, see the following note. In the regional edge cache location, CloudFront again checks its cache for the requested object. If the object is in the cache, CloudFront forwards it to the POP that requested it. As soon as the first byte arrives from regional edge cache location, CloudFront begins to forward the object to the user. CloudFront also adds the object to the cache in the POP for the next time someone requests it. For objects not cached at either the POP or the regional edge cache location, CloudFront compares the request with the specifications in your distributions and forwards the request to the origin server. After your origin server sends the object back to the regional edge cache location, it is forwarded to the POP, and then CloudFront forwards it to the user. In this case, CloudFront also adds the object to the cache in the regional edge cache location in addition to the POP for the next time a viewer requests it. This makes sure that all of the POPs in a region share a local cache, eliminating multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. 
How CloudFront works with regional edge caches 9"} +{"global_id": 1376, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 1, "question": "What does CloudFront keep with origin servers?", "answer_span": "CloudFront also keeps persistent connections with origin servers", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1377, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 2, "question": "What is the purpose of CloudFront's persistent connections?", "answer_span": "so objects are fetched from the origins as quickly as possible", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1378, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 3, "question": "What requests to origin servers does the chunk mention?", "answer_span": "multiple requests to origin servers", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1379, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 4, "question": "Which section heading appears at the end of the chunk?", "answer_span": "How CloudFront works with regional edge caches", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. 
In this case, CloudFront also adds the object to the cache in the regional edge cache location in addition to the POP for the next time a viewer requests it. This makes sure that all of the POPs in a region share a local cache, eliminating multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1376, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 1, "question": "What does CloudFront keep with origin servers?", "answer_span": "CloudFront also keeps persistent connections with origin servers", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1377, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 2, "question": "What is the purpose of CloudFront's persistent connections?", "answer_span": "so objects are fetched from the origins as quickly as possible", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1378, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 3, "question": "What type of requests does CloudFront handle?", "answer_span": "multiple requests to origin servers", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1379, "doc_id": "cloudfront", "chunk_id": "11", "question_id": 4, "question": "What does the text mention about how CloudFront works?", "answer_span": "How CloudFront works with regional edge caches", "chunk": "multiple requests to origin servers. CloudFront also keeps persistent connections with origin servers so objects are fetched from the origins as quickly as possible. How CloudFront works with regional edge caches 9"} +{"global_id": 1380, "doc_id": "bedrock", "chunk_id": "0", "question_id": 1, "question": "What is Amazon Bedrock?", "answer_span": "Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API.", "chunk": "Amazon Bedrock User Guide What is Amazon Bedrock? Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API. You can choose from a wide range of foundation models to find the model that is best suited for your use case. Amazon Bedrock also offers a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top foundation models for your use cases, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. 
With Amazon Bedrock's serverless experience, you can get started quickly, privately customize foundation models with your own data, and easily and securely integrate and deploy them into your applications using AWS tools without having to manage any infrastructure. Topics • What can I do with Amazon Bedrock? • How do I get started with Amazon Bedrock? • Amazon Bedrock pricing • Key terminology What can I do with Amazon Bedrock? You can use Amazon Bedrock to do the following: • Experiment with prompts and configurations – Submit prompts and generate responses with model inference by sending prompts using different configurations and foundation models to generate responses. You can use the API or the text, image, and chat playgrounds in the console to experiment in a graphical interface. When you're ready, set up your application to make requests to the InvokeModel APIs. • Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. What can I do with"} +{"global_id": 1381, "doc_id": "bedrock", "chunk_id": "0", "question_id": 2, "question": "What can I do with Amazon Bedrock?", "answer_span": "You can use Amazon Bedrock to do the following: • Experiment with prompts and configurations – Submit prompts and generate responses with model inference by sending prompts using different configurations and foundation models to generate responses.", "chunk": "Amazon Bedrock User Guide What is Amazon Bedrock? Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API. You can choose from a wide range of foundation models to find the model that is best suited for your use case. Amazon Bedrock also offers a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top foundation models for your use cases, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. With Amazon Bedrock's serverless experience, you can get started quickly, privately customize foundation models with your own data, and easily and securely integrate and deploy them into your applications using AWS tools without having to manage any infrastructure. Topics • What can I do with Amazon Bedrock? • How do I get started with Amazon Bedrock? • Amazon Bedrock pricing • Key terminology What can I do with Amazon Bedrock? You can use Amazon Bedrock to do the following: • Experiment with prompts and configurations – Submit prompts and generate responses with model inference by sending prompts using different configurations and foundation models to generate responses. You can use the API or the text, image, and chat playgrounds in the console to experiment in a graphical interface. When you're ready, set up your application to make requests to the InvokeModel APIs. • Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. 
What can I do with"} +{"global_id": 1382, "doc_id": "bedrock", "chunk_id": "0", "question_id": 3, "question": "How can I customize foundation models using Amazon Bedrock?", "answer_span": "Using Amazon Bedrock, you can easily experiment with and evaluate top foundation models for your use cases, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG).", "chunk": "Amazon Bedrock User Guide What is Amazon Bedrock? Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API. You can choose from a wide range of foundation models to find the model that is best suited for your use case. Amazon Bedrock also offers a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top foundation models for your use cases, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. With Amazon Bedrock's serverless experience, you can get started quickly, privately customize foundation models with your own data, and easily and securely integrate and deploy them into your applications using AWS tools without having to manage any infrastructure. Topics • What can I do with Amazon Bedrock? • How do I get started with Amazon Bedrock? • Amazon Bedrock pricing • Key terminology What can I do with Amazon Bedrock? You can use Amazon Bedrock to do the following: • Experiment with prompts and configurations – Submit prompts and generate responses with model inference by sending prompts using different configurations and foundation models to generate responses. You can use the API or the text, image, and chat playgrounds in the console to experiment in a graphical interface. When you're ready, set up your application to make requests to the InvokeModel APIs. • Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. What can I do with"} +{"global_id": 1383, "doc_id": "bedrock", "chunk_id": "0", "question_id": 4, "question": "What experience does Amazon Bedrock offer?", "answer_span": "With Amazon Bedrock's serverless experience, you can get started quickly, privately customize foundation models with your own data, and easily and securely integrate and deploy them into your applications using AWS tools without having to manage any infrastructure.", "chunk": "Amazon Bedrock User Guide What is Amazon Bedrock? Amazon Bedrock is a fully managed service that makes high-performing foundation models (FMs) from leading AI companies and Amazon available for your use through a unified API. You can choose from a wide range of foundation models to find the model that is best suited for your use case. Amazon Bedrock also offers a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top foundation models for your use cases, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. 
With Amazon Bedrock's serverless experience, you can get started quickly, privately customize foundation models with your own data, and easily and securely integrate and deploy them into your applications using AWS tools without having to manage any infrastructure. Topics • What can I do with Amazon Bedrock? • How do I get started with Amazon Bedrock? • Amazon Bedrock pricing • Key terminology What can I do with Amazon Bedrock? You can use Amazon Bedrock to do the following: • Experiment with prompts and configurations – Submit prompts and generate responses with model inference by sending prompts using different configurations and foundation models to generate responses. You can use the API or the text, image, and chat playgrounds in the console to experiment in a graphical interface. When you're ready, set up your application to make requests to the InvokeModel APIs. • Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. What can I do with"} +{"global_id": 1384, "doc_id": "bedrock", "chunk_id": "1", "question_id": 1, "question": "What can I do with Amazon Bedrock?", "answer_span": "What can I do with Amazon Bedrock? 1 Amazon Bedrock User Guide", "chunk": "When you're ready, set up your application to make requests to the InvokeModel APIs. • Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. What can I do with Amazon Bedrock? 1 Amazon Bedrock User Guide • Create applications that reason through how to help a customer – Build agents that use foundation models, make API calls, and (optionally) query knowledge bases in order to reason through and carry out tasks for your customers. • Adapt models to specific tasks and domains with training data – Customize an Amazon Bedrock foundation model by providing training data for fine-tuning or continued-pretraining in order to adjust a model's parameters and improve its performance on specific tasks or in certain domains. • Improve your FM-based application's efficiency and output – Purchase Provisioned Throughput for a foundation model in order to run inference on models more efficiently and at discounted rates. • Determine the best model for your use case – Evaluate outputs of different models with built-in or custom prompt datasets to determine the model that is best suited for your application. • Prevent inappropriate or unwanted content – Use guardrails to implement safeguards for your generative AI applications. • Optimize your FM's latency – Get faster response times and improved responsiveness for AI applications with Latency-optimized inference for foundation models. Note The Latency Optimized Inference feature is in preview release for Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? We"} +{"global_id": 1385, "doc_id": "bedrock", "chunk_id": "1", "question_id": 2, "question": "How do I get started with Amazon Bedrock?", "answer_span": "How do I get started with Amazon Bedrock?", "chunk": "When you're ready, set up your application to make requests to the InvokeModel APIs. 
• Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. What can I do with Amazon Bedrock? 1 Amazon Bedrock User Guide • Create applications that reason through how to help a customer – Build agents that use foundation models, make API calls, and (optionally) query knowledge bases in order to reason through and carry out tasks for your customers. • Adapt models to specific tasks and domains with training data – Customize an Amazon Bedrock foundation model by providing training data for fine-tuning or continued-pretraining in order to adjust a model's parameters and improve its performance on specific tasks or in certain domains. • Improve your FM-based application's efficiency and output – Purchase Provisioned Throughput for a foundation model in order to run inference on models more efficiently and at discounted rates. • Determine the best model for your use case – Evaluate outputs of different models with built-in or custom prompt datasets to determine the model that is best suited for your application. • Prevent inappropriate or unwanted content – Use guardrails to implement safeguards for your generative AI applications. • Optimize your FM's latency – Get faster response times and improved responsiveness for AI applications with Latency-optimized inference for foundation models. Note The Latency Optimized Inference feature is in preview release for Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? We"} +{"global_id": 1386, "doc_id": "bedrock", "chunk_id": "1", "question_id": 3, "question": "What is the purpose of guardrails in generative AI applications?", "answer_span": "Use guardrails to implement safeguards for your generative AI applications.", "chunk": "When you're ready, set up your application to make requests to the InvokeModel APIs. • Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. What can I do with Amazon Bedrock? 1 Amazon Bedrock User Guide • Create applications that reason through how to help a customer – Build agents that use foundation models, make API calls, and (optionally) query knowledge bases in order to reason through and carry out tasks for your customers. • Adapt models to specific tasks and domains with training data – Customize an Amazon Bedrock foundation model by providing training data for fine-tuning or continued-pretraining in order to adjust a model's parameters and improve its performance on specific tasks or in certain domains. • Improve your FM-based application's efficiency and output – Purchase Provisioned Throughput for a foundation model in order to run inference on models more efficiently and at discounted rates. • Determine the best model for your use case – Evaluate outputs of different models with built-in or custom prompt datasets to determine the model that is best suited for your application. • Prevent inappropriate or unwanted content – Use guardrails to implement safeguards for your generative AI applications. 
• Optimize your FM's latency – Get faster response times and improved responsiveness for AI applications with Latency-optimized inference for foundation models. Note The Latency Optimized Inference feature is in preview release for Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? We"} +{"global_id": 1387, "doc_id": "bedrock", "chunk_id": "1", "question_id": 4, "question": "What feature is in preview release for Amazon Bedrock?", "answer_span": "The Latency Optimized Inference feature is in preview release for Amazon Bedrock and is subject to change.", "chunk": "When you're ready, set up your application to make requests to the InvokeModel APIs. • Augment response generation with information from your data sources – Create knowledge bases by uploading data sources to be queried in order to augment a foundation model's generation of responses. What can I do with Amazon Bedrock? 1 Amazon Bedrock User Guide • Create applications that reason through how to help a customer – Build agents that use foundation models, make API calls, and (optionally) query knowledge bases in order to reason through and carry out tasks for your customers. • Adapt models to specific tasks and domains with training data – Customize an Amazon Bedrock foundation model by providing training data for fine-tuning or continued-pretraining in order to adjust a model's parameters and improve its performance on specific tasks or in certain domains. • Improve your FM-based application's efficiency and output – Purchase Provisioned Throughput for a foundation model in order to run inference on models more efficiently and at discounted rates. • Determine the best model for your use case – Evaluate outputs of different models with built-in or custom prompt datasets to determine the model that is best suited for your application. • Prevent inappropriate or unwanted content – Use guardrails to implement safeguards for your generative AI applications. • Optimize your FM's latency – Get faster response times and improved responsiveness for AI applications with Latency-optimized inference for foundation models. Note The Latency Optimized Inference feature is in preview release for Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? We"} +{"global_id": 1388, "doc_id": "bedrock", "chunk_id": "2", "question_id": 1, "question": "What should I do to get started with Amazon Bedrock?", "answer_span": "We recommend that you start with Amazon Bedrock by doing the following: 1. Familiarize yourself with the terms and concepts that Amazon Bedrock uses. 2. Understand how AWS charges you for using Amazon Bedrock. 3. Try the Getting started with Amazon Bedrock tutorials.", "chunk": "Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? 
We recommend that you start with Amazon Bedrock by doing the following: 1. Familiarize yourself with the terms and concepts that Amazon Bedrock uses. 2. Understand how AWS charges you for using Amazon Bedrock. 3. Try the Getting started with Amazon Bedrock tutorials. In the tutorials, you learn how to use the playgrounds in Amazon Bedrock console. You also learn and how to use the AWS SDK to call Amazon Bedrock API operations. How do I get started with Amazon Bedrock? 2 Amazon Bedrock User Guide 4. Read the documentation for the features that you want to include in your application. Amazon Bedrock pricing When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon Bedrock. However, you are charged only for the services that you use. For information about pricing for different Amazon Bedrock resources, see Amazon Bedrock pricing. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. To learn more about AWS account billing, see the AWS Billing User Guide. If you have questions concerning AWS billing and AWS accounts, contact AWS Support. With Amazon Bedrock, you pay to run inference on any of the third-party foundation models. Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is"} +{"global_id": 1389, "doc_id": "bedrock", "chunk_id": "2", "question_id": 2, "question": "How can I see my bill for Amazon Bedrock?", "answer_span": "To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console.", "chunk": "Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? We recommend that you start with Amazon Bedrock by doing the following: 1. Familiarize yourself with the terms and concepts that Amazon Bedrock uses. 2. Understand how AWS charges you for using Amazon Bedrock. 3. Try the Getting started with Amazon Bedrock tutorials. In the tutorials, you learn how to use the playgrounds in Amazon Bedrock console. You also learn and how to use the AWS SDK to call Amazon Bedrock API operations. How do I get started with Amazon Bedrock? 2 Amazon Bedrock User Guide 4. Read the documentation for the features that you want to include in your application. Amazon Bedrock pricing When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon Bedrock. However, you are charged only for the services that you use. For information about pricing for different Amazon Bedrock resources, see Amazon Bedrock pricing. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. To learn more about AWS account billing, see the AWS Billing User Guide. If you have questions concerning AWS billing and AWS accounts, contact AWS Support. With Amazon Bedrock, you pay to run inference on any of the third-party foundation models. Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. 
For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is"} +{"global_id": 1390, "doc_id": "bedrock", "chunk_id": "2", "question_id": 3, "question": "What is the basis for pricing with Amazon Bedrock?", "answer_span": "Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model.", "chunk": "Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? We recommend that you start with Amazon Bedrock by doing the following: 1. Familiarize yourself with the terms and concepts that Amazon Bedrock uses. 2. Understand how AWS charges you for using Amazon Bedrock. 3. Try the Getting started with Amazon Bedrock tutorials. In the tutorials, you learn how to use the playgrounds in Amazon Bedrock console. You also learn and how to use the AWS SDK to call Amazon Bedrock API operations. How do I get started with Amazon Bedrock? 2 Amazon Bedrock User Guide 4. Read the documentation for the features that you want to include in your application. Amazon Bedrock pricing When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon Bedrock. However, you are charged only for the services that you use. For information about pricing for different Amazon Bedrock resources, see Amazon Bedrock pricing. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. To learn more about AWS account billing, see the AWS Billing User Guide. If you have questions concerning AWS billing and AWS accounts, contact AWS Support. With Amazon Bedrock, you pay to run inference on any of the third-party foundation models. Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is"} +{"global_id": 1391, "doc_id": "bedrock", "chunk_id": "2", "question_id": 4, "question": "Where can I find information about pricing for different Amazon Bedrock resources?", "answer_span": "For information about pricing for different Amazon Bedrock resources, see Amazon Bedrock pricing.", "chunk": "Amazon Bedrock and is subject to change. To learn about Regions that support Amazon Bedrock and the foundation models and features that Amazon Bedrock supports, see Supported foundation models in Amazon Bedrock and Feature support by AWS Region in Amazon Bedrock. How do I get started with Amazon Bedrock? We recommend that you start with Amazon Bedrock by doing the following: 1. Familiarize yourself with the terms and concepts that Amazon Bedrock uses. 2. Understand how AWS charges you for using Amazon Bedrock. 3. Try the Getting started with Amazon Bedrock tutorials. In the tutorials, you learn how to use the playgrounds in Amazon Bedrock console. You also learn and how to use the AWS SDK to call Amazon Bedrock API operations. How do I get started with Amazon Bedrock? 2 Amazon Bedrock User Guide 4. Read the documentation for the features that you want to include in your application. 
Amazon Bedrock pricing When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon Bedrock. However, you are charged only for the services that you use. For information about pricing for different Amazon Bedrock resources, see Amazon Bedrock pricing. To see your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost Management console. To learn more about AWS account billing, see the AWS Billing User Guide. If you have questions concerning AWS billing and AWS accounts, contact AWS Support. With Amazon Bedrock, you pay to run inference on any of the third-party foundation models. Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is"} +{"global_id": 1392, "doc_id": "bedrock", "chunk_id": "3", "question_id": 1, "question": "What is pricing based on for running inference on third-party foundation models?", "answer_span": "Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model.", "chunk": "run inference on any of the third-party foundation models. Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is listed following the model version. For more information about purchasing Provisioned Throughput, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. Key terminology This chapter explains terminology that will help you understand what Amazon Bedrock offers and how it works. Read through the following list to understand generative AI terminology and Amazon Bedrock's fundamental capabilities: • Foundation model (FM) – An AI model with a large number of parameters and trained on a massive amount of diverse data. A foundation model can generate a variety of responses for a wide range of use cases. Foundation models can generate text or image, and can also convert input into embeddings. Before you can use an Amazon Bedrock foundation model, you must request access. For more information about foundation models, see Supported foundation models in Amazon Bedrock. • Base model – A foundation model that is packaged by a provider and ready to use. Amazon Bedrock offers a variety of industry-leading foundation models from leading providers. For more information, see Supported foundation models in Amazon Bedrock. Amazon Bedrock pricing 3 Amazon Bedrock User Guide • Model inference – The process of a foundation model generating an output (response) from a given input (prompt). For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model"} +{"global_id": 1393, "doc_id": "bedrock", "chunk_id": "3", "question_id": 2, "question": "What is a foundation model?", "answer_span": "Foundation model (FM) – An AI model with a large number of parameters and trained on a massive amount of diverse data.", "chunk": "run inference on any of the third-party foundation models. 
Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is listed following the model version. For more information about purchasing Provisioned Throughput, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. Key terminology This chapter explains terminology that will help you understand what Amazon Bedrock offers and how it works. Read through the following list to understand generative AI terminology and Amazon Bedrock's fundamental capabilities: • Foundation model (FM) – An AI model with a large number of parameters and trained on a massive amount of diverse data. A foundation model can generate a variety of responses for a wide range of use cases. Foundation models can generate text or image, and can also convert input into embeddings. Before you can use an Amazon Bedrock foundation model, you must request access. For more information about foundation models, see Supported foundation models in Amazon Bedrock. • Base model – A foundation model that is packaged by a provider and ready to use. Amazon Bedrock offers a variety of industry-leading foundation models from leading providers. For more information, see Supported foundation models in Amazon Bedrock. Amazon Bedrock pricing 3 Amazon Bedrock User Guide • Model inference – The process of a foundation model generating an output (response) from a given input (prompt). For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model"} +{"global_id": 1394, "doc_id": "bedrock", "chunk_id": "3", "question_id": 3, "question": "What is model inference?", "answer_span": "Model inference – The process of a foundation model generating an output (response) from a given input (prompt).", "chunk": "run inference on any of the third-party foundation models. Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is listed following the model version. For more information about purchasing Provisioned Throughput, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. Key terminology This chapter explains terminology that will help you understand what Amazon Bedrock offers and how it works. Read through the following list to understand generative AI terminology and Amazon Bedrock's fundamental capabilities: • Foundation model (FM) – An AI model with a large number of parameters and trained on a massive amount of diverse data. A foundation model can generate a variety of responses for a wide range of use cases. Foundation models can generate text or image, and can also convert input into embeddings. Before you can use an Amazon Bedrock foundation model, you must request access. For more information about foundation models, see Supported foundation models in Amazon Bedrock. • Base model – A foundation model that is packaged by a provider and ready to use. Amazon Bedrock offers a variety of industry-leading foundation models from leading providers. For more information, see Supported foundation models in Amazon Bedrock. 
Amazon Bedrock pricing 3 Amazon Bedrock User Guide • Model inference – The process of a foundation model generating an output (response) from a given input (prompt). For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model"} +{"global_id": 1395, "doc_id": "bedrock", "chunk_id": "3", "question_id": 4, "question": "What must you do before using an Amazon Bedrock foundation model?", "answer_span": "Before you can use an Amazon Bedrock foundation model, you must request access.", "chunk": "run inference on any of the third-party foundation models. Pricing is based on the volume of input tokens and output tokens, and on whether you have purchased Provisioned Throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console. For each model, pricing is listed following the model version. For more information about purchasing Provisioned Throughput, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. Key terminology This chapter explains terminology that will help you understand what Amazon Bedrock offers and how it works. Read through the following list to understand generative AI terminology and Amazon Bedrock's fundamental capabilities: • Foundation model (FM) – An AI model with a large number of parameters and trained on a massive amount of diverse data. A foundation model can generate a variety of responses for a wide range of use cases. Foundation models can generate text or image, and can also convert input into embeddings. Before you can use an Amazon Bedrock foundation model, you must request access. For more information about foundation models, see Supported foundation models in Amazon Bedrock. • Base model – A foundation model that is packaged by a provider and ready to use. Amazon Bedrock offers a variety of industry-leading foundation models from leading providers. For more information, see Supported foundation models in Amazon Bedrock. Amazon Bedrock pricing 3 Amazon Bedrock User Guide • Model inference – The process of a foundation model generating an output (response) from a given input (prompt). For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model"} +{"global_id": 1396, "doc_id": "bedrock", "chunk_id": "4", "question_id": 1, "question": "What is a prompt?", "answer_span": "Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input.", "chunk": "input (prompt). For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model to respond to, or it can detail instructions or a task for the model to perform. The prompt can contain the context of the task, examples of outputs, or text for a model to use in its response. Prompts can be used to carry out tasks such as classification, question answering, code generation, creative writing, and more. For more information, see Prompt engineering concepts. 
• Token – A sequence of characters that a model can interpret or predict as a single unit of meaning. For example, with text models, a token could correspond not just to a word, but also to a part of a word with grammatical meaning (such as \"-ed\"), a punctuation mark (such as \"?\"), or a common phrase (such as \"a lot\"). • Model parameters – Values that define a model and its behavior in interpreting input and generating responses. Model parameters are controlled and updated by providers. You can also update model parameters to create a new model through the process of model customization. • Inference parameters – Values that can be adjusted during model inference to influence a response. Inference parameters can affect how varied responses are and can also limit the length of a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon"} +{"global_id": 1397, "doc_id": "bedrock", "chunk_id": "4", "question_id": 2, "question": "What can prompts be used to carry out?", "answer_span": "Prompts can be used to carry out tasks such as classification, question answering, code generation, creative writing, and more.", "chunk": "input (prompt). For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model to respond to, or it can detail instructions or a task for the model to perform. The prompt can contain the context of the task, examples of outputs, or text for a model to use in its response. Prompts can be used to carry out tasks such as classification, question answering, code generation, creative writing, and more. For more information, see Prompt engineering concepts. • Token – A sequence of characters that a model can interpret or predict as a single unit of meaning. For example, with text models, a token could correspond not just to a word, but also to a part of a word with grammatical meaning (such as \"-ed\"), a punctuation mark (such as \"?\"), or a common phrase (such as \"a lot\"). • Model parameters – Values that define a model and its behavior in interpreting input and generating responses. Model parameters are controlled and updated by providers. You can also update model parameters to create a new model through the process of model customization. • Inference parameters – Values that can be adjusted during model inference to influence a response. Inference parameters can affect how varied responses are and can also limit the length of a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon"} +{"global_id": 1398, "doc_id": "bedrock", "chunk_id": "4", "question_id": 3, "question": "What is a token?", "answer_span": "Token – A sequence of characters that a model can interpret or predict as a single unit of meaning.", "chunk": "input (prompt). 
For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model to respond to, or it can detail instructions or a task for the model to perform. The prompt can contain the context of the task, examples of outputs, or text for a model to use in its response. Prompts can be used to carry out tasks such as classification, question answering, code generation, creative writing, and more. For more information, see Prompt engineering concepts. • Token – A sequence of characters that a model can interpret or predict as a single unit of meaning. For example, with text models, a token could correspond not just to a word, but also to a part of a word with grammatical meaning (such as \"-ed\"), a punctuation mark (such as \"?\"), or a common phrase (such as \"a lot\"). • Model parameters – Values that define a model and its behavior in interpreting input and generating responses. Model parameters are controlled and updated by providers. You can also update model parameters to create a new model through the process of model customization. • Inference parameters – Values that can be adjusted during model inference to influence a response. Inference parameters can affect how varied responses are and can also limit the length of a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon"} +{"global_id": 1399, "doc_id": "bedrock", "chunk_id": "4", "question_id": 4, "question": "What are model parameters?", "answer_span": "Model parameters – Values that define a model and its behavior in interpreting input and generating responses.", "chunk": "input (prompt). For more information, see Submit prompts and generate responses with model inference. • Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model to respond to, or it can detail instructions or a task for the model to perform. The prompt can contain the context of the task, examples of outputs, or text for a model to use in its response. Prompts can be used to carry out tasks such as classification, question answering, code generation, creative writing, and more. For more information, see Prompt engineering concepts. • Token – A sequence of characters that a model can interpret or predict as a single unit of meaning. For example, with text models, a token could correspond not just to a word, but also to a part of a word with grammatical meaning (such as \"-ed\"), a punctuation mark (such as \"?\"), or a common phrase (such as \"a lot\"). • Model parameters – Values that define a model and its behavior in interpreting input and generating responses. Model parameters are controlled and updated by providers. You can also update model parameters to create a new model through the process of model customization. • Inference parameters – Values that can be adjusted during model inference to influence a response. 
Inference parameters can affect how varied responses are and can also limit the length of a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon"} +{"global_id": 1400, "doc_id": "bedrock", "chunk_id": "5", "question_id": 1, "question": "What is the playground in the AWS Management Console used for?", "answer_span": "A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon Bedrock.", "chunk": "a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon Bedrock. Use the playground to test out the effects of different models, configurations, and inference parameters on the responses generated for different prompts that you enter. For more information, see Generate responses in the console using playgrounds. • Embedding – The process of condensing information by transforming input into a vector of numerical values, known as the embeddings, in order to compare the similarity between different objects by using a shared numerical representation. For example, sentences can be compared to determine the similarity in meaning, images can be compared to determine visual similarity, or text and image can be compared to see if they're relevant to each other. You can also combine text and image inputs into an averaged embeddings vector if it's relevant to your use case. For more information, see Submit prompts and generate responses with model inference and Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. Key terminology 4 Amazon Bedrock User Guide • Orchestration – The process of coordinating between foundation models and enterprise data and applications in order to carry out a task. For more information, see Automate tasks in your application using AI agents. • Agent – An application that carries out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information"} +{"global_id": 1401, "doc_id": "bedrock", "chunk_id": "5", "question_id": 2, "question": "What does embedding refer to?", "answer_span": "The process of condensing information by transforming input into a vector of numerical values, known as the embeddings, in order to compare the similarity between different objects by using a shared numerical representation.", "chunk": "a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon Bedrock. 
Use the playground to test out the effects of different models, configurations, and inference parameters on the responses generated for different prompts that you enter. For more information, see Generate responses in the console using playgrounds. • Embedding – The process of condensing information by transforming input into a vector of numerical values, known as the embeddings, in order to compare the similarity between different objects by using a shared numerical representation. For example, sentences can be compared to determine the similarity in meaning, images can be compared to determine visual similarity, or text and image can be compared to see if they're relevant to each other. You can also combine text and image inputs into an averaged embeddings vector if it's relevant to your use case. For more information, see Submit prompts and generate responses with model inference and Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. Key terminology 4 Amazon Bedrock User Guide • Orchestration – The process of coordinating between foundation models and enterprise data and applications in order to carry out a task. For more information, see Automate tasks in your application using AI agents. • Agent – An application that carries out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information"} +{"global_id": 1402, "doc_id": "bedrock", "chunk_id": "5", "question_id": 3, "question": "What is orchestration?", "answer_span": "The process of coordinating between foundation models and enterprise data and applications in order to carry out a task.", "chunk": "a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon Bedrock. Use the playground to test out the effects of different models, configurations, and inference parameters on the responses generated for different prompts that you enter. For more information, see Generate responses in the console using playgrounds. • Embedding – The process of condensing information by transforming input into a vector of numerical values, known as the embeddings, in order to compare the similarity between different objects by using a shared numerical representation. For example, sentences can be compared to determine the similarity in meaning, images can be compared to determine visual similarity, or text and image can be compared to see if they're relevant to each other. You can also combine text and image inputs into an averaged embeddings vector if it's relevant to your use case. For more information, see Submit prompts and generate responses with model inference and Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. Key terminology 4 Amazon Bedrock User Guide • Orchestration – The process of coordinating between foundation models and enterprise data and applications in order to carry out a task. For more information, see Automate tasks in your application using AI agents. 
• Agent – An application that carries out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information"} +{"global_id": 1403, "doc_id": "bedrock", "chunk_id": "5", "question_id": 4, "question": "What is an agent?", "answer_span": "An application that carries out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model.", "chunk": "a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters. • Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon Bedrock. Use the playground to test out the effects of different models, configurations, and inference parameters on the responses generated for different prompts that you enter. For more information, see Generate responses in the console using playgrounds. • Embedding – The process of condensing information by transforming input into a vector of numerical values, known as the embeddings, in order to compare the similarity between different objects by using a shared numerical representation. For example, sentences can be compared to determine the similarity in meaning, images can be compared to determine visual similarity, or text and image can be compared to see if they're relevant to each other. You can also combine text and image inputs into an averaged embeddings vector if it's relevant to your use case. For more information, see Submit prompts and generate responses with model inference and Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. Key terminology 4 Amazon Bedrock User Guide • Orchestration – The process of coordinating between foundation models and enterprise data and applications in order to carry out a task. For more information, see Automate tasks in your application using AI agents. • Agent – An application that carries out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information"} +{"global_id": 1404, "doc_id": "bedrock", "chunk_id": "6", "question_id": 1, "question": "What is the process of Retrieval augmented generation (RAG)?", "answer_span": "The process involves: 1. Querying and retrieving information from a data source 2. Augmenting a prompt with this information to provide better context to the foundation model 3. Obtaining a better response from the foundation model using the additional context", "chunk": "out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information from a data source 2. Augmenting a prompt with this information to provide better context to the foundation model 3. 
Obtaining a better response from the foundation model using the additional context For more information, see Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. • Model customization – The process of using training data to adjust the model parameter values in a base model in order to create a custom model. Examples of model customization include Fine-tuning, which uses labeled data (inputs and corresponding outputs), and Continued Pre-training, which uses unlabeled data (inputs only) to adjust model parameters. For more information about model customization techniques available in Amazon Bedrock, see Customize your model to improve its performance for your use case. • Hyperparameters – Values that can be adjusted for model customization to control the training process and, consequently, the output custom model. For more information and definitions of specific hyperparameters, see Custom model hyperparameters. • Model evaluation – The process of evaluating and comparing model outputs in order to determine the model that is best suited for a use case. For more information, see Evaluate the performance of Amazon Bedrock resources. • Provisioned Throughput – A level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more"} +{"global_id": 1405, "doc_id": "bedrock", "chunk_id": "6", "question_id": 2, "question": "What is model customization?", "answer_span": "The process of using training data to adjust the model parameter values in a base model in order to create a custom model.", "chunk": "out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information from a data source 2. Augmenting a prompt with this information to provide better context to the foundation model 3. Obtaining a better response from the foundation model using the additional context For more information, see Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. • Model customization – The process of using training data to adjust the model parameter values in a base model in order to create a custom model. Examples of model customization include Fine-tuning, which uses labeled data (inputs and corresponding outputs), and Continued Pre-training, which uses unlabeled data (inputs only) to adjust model parameters. For more information about model customization techniques available in Amazon Bedrock, see Customize your model to improve its performance for your use case. • Hyperparameters – Values that can be adjusted for model customization to control the training process and, consequently, the output custom model. For more information and definitions of specific hyperparameters, see Custom model hyperparameters. • Model evaluation – The process of evaluating and comparing model outputs in order to determine the model that is best suited for a use case. For more information, see Evaluate the performance of Amazon Bedrock resources. 
• Provisioned Throughput – A level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more"} +{"global_id": 1406, "doc_id": "bedrock", "chunk_id": "6", "question_id": 3, "question": "What are hyperparameters?", "answer_span": "Values that can be adjusted for model customization to control the training process and, consequently, the output custom model.", "chunk": "out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information from a data source 2. Augmenting a prompt with this information to provide better context to the foundation model 3. Obtaining a better response from the foundation model using the additional context For more information, see Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. • Model customization – The process of using training data to adjust the model parameter values in a base model in order to create a custom model. Examples of model customization include Fine-tuning, which uses labeled data (inputs and corresponding outputs), and Continued Pre-training, which uses unlabeled data (inputs only) to adjust model parameters. For more information about model customization techniques available in Amazon Bedrock, see Customize your model to improve its performance for your use case. • Hyperparameters – Values that can be adjusted for model customization to control the training process and, consequently, the output custom model. For more information and definitions of specific hyperparameters, see Custom model hyperparameters. • Model evaluation – The process of evaluating and comparing model outputs in order to determine the model that is best suited for a use case. For more information, see Evaluate the performance of Amazon Bedrock resources. • Provisioned Throughput – A level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more"} +{"global_id": 1407, "doc_id": "bedrock", "chunk_id": "6", "question_id": 4, "question": "What is Provisioned Throughput?", "answer_span": "A level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference.", "chunk": "out orchestrations through cyclically interpreting inputs and producing outputs by using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents. • Retrieval augmented generation (RAG) – The process involves: 1. Querying and retrieving information from a data source 2. Augmenting a prompt with this information to provide better context to the foundation model 3. Obtaining a better response from the foundation model using the additional context For more information, see Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases. 
• Model customization – The process of using training data to adjust the model parameter values in a base model in order to create a custom model. Examples of model customization include Fine-tuning, which uses labeled data (inputs and corresponding outputs), and Continued Pre-training, which uses unlabeled data (inputs only) to adjust model parameters. For more information about model customization techniques available in Amazon Bedrock, see Customize your model to improve its performance for your use case. • Hyperparameters – Values that can be adjusted for model customization to control the training process and, consequently, the output custom model. For more information and definitions of specific hyperparameters, see Custom model hyperparameters. • Model evaluation – The process of evaluating and comparing model outputs in order to determine the model that is best suited for a use case. For more information, see Evaluate the performance of Amazon Bedrock resources. • Provisioned Throughput – A level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more"} +{"global_id": 1408, "doc_id": "bedrock", "chunk_id": "7", "question_id": 1, "question": "What is the purpose of purchasing Provisioned Throughput for a model?", "answer_span": "in order to increase the amount and/or rate of tokens processed during model inference.", "chunk": "level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. Key terminology 5"} +{"global_id": 1409, "doc_id": "bedrock", "chunk_id": "7", "question_id": 2, "question": "What is created when you purchase Provisioned Throughput for a model?", "answer_span": "a provisioned model is created that can be used to carry out model inference.", "chunk": "level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. Key terminology 5"} +{"global_id": 1410, "doc_id": "bedrock", "chunk_id": "7", "question_id": 3, "question": "Where can you find more information about increasing model invocation capacity?", "answer_span": "see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock.", "chunk": "level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. 
Key terminology 5"} +{"global_id": 1411, "doc_id": "bedrock", "chunk_id": "7", "question_id": 4, "question": "What does the term 'Key terminology' refer to in the text?", "answer_span": "Key terminology 5", "chunk": "level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock. Key terminology 5"} +{"global_id": 1412, "doc_id": "outposts", "chunk_id": "0", "question_id": 1, "question": "What is AWS Outposts?", "answer_span": "AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises.", "chunk": "AWS Outposts User Guide for Outposts racks What is AWS Outposts? AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs. An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates, monitors, and manages this capacity as part of an AWS Region. You can create subnets on your Outpost and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and RDS instances. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC. Note You can't connect an Outpost to another Outpost or Local Zone that is within the same VPC. For more information, see the AWS Outposts product page. Key concepts These are the key concepts for AWS Outposts. • Outpost site – The customer-managed physical buildings where AWS will install your Outpost. A site must meet the facility, networking, and power requirements for your Outpost. • Outpost capacity – Compute and storage resources available on the Outpost. You can view and manage the capacity for your Outpost from the AWS Outposts console. AWS Outposts supports self-service capacity management that you can define at the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. The hardware"} +{"global_id": 1413, "doc_id": "outposts", "chunk_id": "0", "question_id": 2, "question": "What can you create on your Outpost?", "answer_span": "You can create subnets on your Outpost and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and RDS instances.", "chunk": "AWS Outposts User Guide for Outposts racks What is AWS Outposts? AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs. 
An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates, monitors, and manages this capacity as part of an AWS Region. You can create subnets on your Outpost and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and RDS instances. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC. Note You can't connect an Outpost to another Outpost or Local Zone that is within the same VPC. For more information, see the AWS Outposts product page. Key concepts These are the key concepts for AWS Outposts. • Outpost site – The customer-managed physical buildings where AWS will install your Outpost. A site must meet the facility, networking, and power requirements for your Outpost. • Outpost capacity – Compute and storage resources available on the Outpost. You can view and manage the capacity for your Outpost from the AWS Outposts console. AWS Outposts supports self-service capacity management that you can define at the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. The hardware"} +{"global_id": 1414, "doc_id": "outposts", "chunk_id": "0", "question_id": 3, "question": "What is an Outpost?", "answer_span": "An Outpost is a pool of AWS compute and storage capacity deployed at a customer site.", "chunk": "AWS Outposts User Guide for Outposts racks What is AWS Outposts? AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs. An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates, monitors, and manages this capacity as part of an AWS Region. You can create subnets on your Outpost and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and RDS instances. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC. Note You can't connect an Outpost to another Outpost or Local Zone that is within the same VPC. For more information, see the AWS Outposts product page. Key concepts These are the key concepts for AWS Outposts. • Outpost site – The customer-managed physical buildings where AWS will install your Outpost. A site must meet the facility, networking, and power requirements for your Outpost. • Outpost capacity – Compute and storage resources available on the Outpost. You can view and manage the capacity for your Outpost from the AWS Outposts console. AWS Outposts supports self-service capacity management that you can define at the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. 
The hardware"} +{"global_id": 1415, "doc_id": "outposts", "chunk_id": "0", "question_id": 4, "question": "What are the key concepts for AWS Outposts?", "answer_span": "These are the key concepts for AWS Outposts.", "chunk": "AWS Outposts User Guide for Outposts racks What is AWS Outposts? AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to customer premises. By providing local access to AWS managed infrastructure, AWS Outposts enables customers to build and run applications on premises using the same programming interfaces as in AWS Regions, while using local compute and storage resources for lower latency and local data processing needs. An Outpost is a pool of AWS compute and storage capacity deployed at a customer site. AWS operates, monitors, and manages this capacity as part of an AWS Region. You can create subnets on your Outpost and specify them when you create AWS resources such as EC2 instances, EBS volumes, ECS clusters, and RDS instances. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC. Note You can't connect an Outpost to another Outpost or Local Zone that is within the same VPC. For more information, see the AWS Outposts product page. Key concepts These are the key concepts for AWS Outposts. • Outpost site – The customer-managed physical buildings where AWS will install your Outpost. A site must meet the facility, networking, and power requirements for your Outpost. • Outpost capacity – Compute and storage resources available on the Outpost. You can view and manage the capacity for your Outpost from the AWS Outposts console. AWS Outposts supports self-service capacity management that you can define at the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. The hardware"} +{"global_id": 1416, "doc_id": "outposts", "chunk_id": "1", "question_id": 1, "question": "What is an Outpost asset?", "answer_span": "An Outpost asset can be a single server within an Outposts rack or an Outposts server.", "chunk": "the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. The hardware includes racks, servers, switches, and cabling owned and managed by AWS. Key concepts 1 AWS Outposts User Guide for Outposts racks • Outposts racks – An Outpost form factor that is an industry-standard 42U rack. Outposts racks include rack-mountable servers, switches, a network patch panel, a power shelf and blank panels. • Outposts ACE racks – The Aggregation, Core, Edge (ACE) rack acts as a network aggregation point for multi-rack Outpost deployments. The ACE rack reduces the number of physical networking port and logical interface requirements by providing connectivity between multiple Outpost compute racks in your logical Outposts and your on-premise network. You must install an ACE rack if you have four or more compute racks. If you have less than four compute racks but plan to expand to four or more racks in the future, we recommend that you install an ACE rack at the earliest. 
For additional information on ACE racks, see Scaling AWS Outposts rack deployments with ACE racks. • Outposts servers – An Outpost form factor that is an industry-standard 1U or 2U server, which can be installed in a standard EIA-310D 19 compliant 4 post rack. Outposts servers provide local compute and networking services to sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. AWS will communicate with the contacts to clarify orders, installation appointments,"} +{"global_id": 1417, "doc_id": "outposts", "chunk_id": "1", "question_id": 2, "question": "What does Outpost equipment include?", "answer_span": "The hardware includes racks, servers, switches, and cabling owned and managed by AWS.", "chunk": "the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. The hardware includes racks, servers, switches, and cabling owned and managed by AWS. Key concepts 1 AWS Outposts User Guide for Outposts racks • Outposts racks – An Outpost form factor that is an industry-standard 42U rack. Outposts racks include rack-mountable servers, switches, a network patch panel, a power shelf and blank panels. • Outposts ACE racks – The Aggregation, Core, Edge (ACE) rack acts as a network aggregation point for multi-rack Outpost deployments. The ACE rack reduces the number of physical networking port and logical interface requirements by providing connectivity between multiple Outpost compute racks in your logical Outposts and your on-premise network. You must install an ACE rack if you have four or more compute racks. If you have less than four compute racks but plan to expand to four or more racks in the future, we recommend that you install an ACE rack at the earliest. For additional information on ACE racks, see Scaling AWS Outposts rack deployments with ACE racks. • Outposts servers – An Outpost form factor that is an industry-standard 1U or 2U server, which can be installed in a standard EIA-310D 19 compliant 4 post rack. Outposts servers provide local compute and networking services to sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. AWS will communicate with the contacts to clarify orders, installation appointments,"} +{"global_id": 1418, "doc_id": "outposts", "chunk_id": "1", "question_id": 3, "question": "What is the purpose of the Outposts ACE rack?", "answer_span": "The ACE rack reduces the number of physical networking port and logical interface requirements by providing connectivity between multiple Outpost compute racks in your logical Outposts and your on-premise network.", "chunk": "the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. The hardware includes racks, servers, switches, and cabling owned and managed by AWS. 
Key concepts 1 AWS Outposts User Guide for Outposts racks • Outposts racks – An Outpost form factor that is an industry-standard 42U rack. Outposts racks include rack-mountable servers, switches, a network patch panel, a power shelf and blank panels. • Outposts ACE racks – The Aggregation, Core, Edge (ACE) rack acts as a network aggregation point for multi-rack Outpost deployments. The ACE rack reduces the number of physical networking port and logical interface requirements by providing connectivity between multiple Outpost compute racks in your logical Outposts and your on-premise network. You must install an ACE rack if you have four or more compute racks. If you have less than four compute racks but plan to expand to four or more racks in the future, we recommend that you install an ACE rack at the earliest. For additional information on ACE racks, see Scaling AWS Outposts rack deployments with ACE racks. • Outposts servers – An Outpost form factor that is an industry-standard 1U or 2U server, which can be installed in a standard EIA-310D 19 compliant 4 post rack. Outposts servers provide local compute and networking services to sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. AWS will communicate with the contacts to clarify orders, installation appointments,"} +{"global_id": 1419, "doc_id": "outposts", "chunk_id": "1", "question_id": 4, "question": "What is the recommendation if you plan to expand to four or more compute racks?", "answer_span": "we recommend that you install an ACE rack at the earliest.", "chunk": "the Outposts level to reconfigure all of the assets in an Outposts or specifically for each individual asset. An Outpost asset can be a single server within an Outposts rack or an Outposts server. • Outpost equipment – Physical hardware that provides access to the AWS Outposts service. The hardware includes racks, servers, switches, and cabling owned and managed by AWS. Key concepts 1 AWS Outposts User Guide for Outposts racks • Outposts racks – An Outpost form factor that is an industry-standard 42U rack. Outposts racks include rack-mountable servers, switches, a network patch panel, a power shelf and blank panels. • Outposts ACE racks – The Aggregation, Core, Edge (ACE) rack acts as a network aggregation point for multi-rack Outpost deployments. The ACE rack reduces the number of physical networking port and logical interface requirements by providing connectivity between multiple Outpost compute racks in your logical Outposts and your on-premise network. You must install an ACE rack if you have four or more compute racks. If you have less than four compute racks but plan to expand to four or more racks in the future, we recommend that you install an ACE rack at the earliest. For additional information on ACE racks, see Scaling AWS Outposts rack deployments with ACE racks. • Outposts servers – An Outpost form factor that is an industry-standard 1U or 2U server, which can be installed in a standard EIA-310D 19 compliant 4 post rack. Outposts servers provide local compute and networking services to sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. 
AWS will communicate with the contacts to clarify orders, installation appointments,"} +{"global_id": 1420, "doc_id": "outposts", "chunk_id": "2", "question_id": 1, "question": "What is the role of the Outpost owner?", "answer_span": "The account owner for the account that places the AWS Outposts order.", "chunk": "sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. AWS will communicate with the contacts to clarify orders, installation appointments, and hardware maintenance and replacement. Contact AWS Support Center if the contact information changes. • Service link – Network route that enables communication between your Outpost and its associated AWS Region. Each Outpost is an extension of an Availability Zone and its associated Region. • Local gateway (LGW) – A logical interconnect virtual router that enables communication between an Outposts rack and your on-premises network. • Local network interface – A network interface that enables communication from an Outposts server and your on-premises network. AWS resources on Outposts You can create the following resources on your Outpost to support low-latency workloads that must run in close proximity to on-premises data and applications: AWS resources on Outposts 2 AWS Outposts User Guide for Outposts racks How AWS Outposts works AWS Outposts is designed to operate with a constant and consistent connection between your Outpost and an AWS Region. To achieve this connection to the Region, and to the local workloads in your on-premises environment, you must connect your Outpost to your on-premises network. Your on-premises network must provide wide area network (WAN) access back to the Region. It must also provide LAN or WAN access to the local network where your on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon"} +{"global_id": 1421, "doc_id": "outposts", "chunk_id": "2", "question_id": 2, "question": "What does the service link enable?", "answer_span": "Network route that enables communication between your Outpost and its associated AWS Region.", "chunk": "sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. AWS will communicate with the contacts to clarify orders, installation appointments, and hardware maintenance and replacement. Contact AWS Support Center if the contact information changes. • Service link – Network route that enables communication between your Outpost and its associated AWS Region. Each Outpost is an extension of an Availability Zone and its associated Region. • Local gateway (LGW) – A logical interconnect virtual router that enables communication between an Outposts rack and your on-premises network. • Local network interface – A network interface that enables communication from an Outposts server and your on-premises network. 
AWS resources on Outposts You can create the following resources on your Outpost to support low-latency workloads that must run in close proximity to on-premises data and applications: AWS resources on Outposts 2 AWS Outposts User Guide for Outposts racks How AWS Outposts works AWS Outposts is designed to operate with a constant and consistent connection between your Outpost and an AWS Region. To achieve this connection to the Region, and to the local workloads in your on-premises environment, you must connect your Outpost to your on-premises network. Your on-premises network must provide wide area network (WAN) access back to the Region. It must also provide LAN or WAN access to the local network where your on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon"} +{"global_id": 1422, "doc_id": "outposts", "chunk_id": "2", "question_id": 3, "question": "What is a local gateway (LGW)?", "answer_span": "A logical interconnect virtual router that enables communication between an Outposts rack and your on-premises network.", "chunk": "sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. AWS will communicate with the contacts to clarify orders, installation appointments, and hardware maintenance and replacement. Contact AWS Support Center if the contact information changes. • Service link – Network route that enables communication between your Outpost and its associated AWS Region. Each Outpost is an extension of an Availability Zone and its associated Region. • Local gateway (LGW) – A logical interconnect virtual router that enables communication between an Outposts rack and your on-premises network. • Local network interface – A network interface that enables communication from an Outposts server and your on-premises network. AWS resources on Outposts You can create the following resources on your Outpost to support low-latency workloads that must run in close proximity to on-premises data and applications: AWS resources on Outposts 2 AWS Outposts User Guide for Outposts racks How AWS Outposts works AWS Outposts is designed to operate with a constant and consistent connection between your Outpost and an AWS Region. To achieve this connection to the Region, and to the local workloads in your on-premises environment, you must connect your Outpost to your on-premises network. Your on-premises network must provide wide area network (WAN) access back to the Region. It must also provide LAN or WAN access to the local network where your on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. 
Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon"} +{"global_id": 1423, "doc_id": "outposts", "chunk_id": "2", "question_id": 4, "question": "What must your on-premises network provide to connect to the AWS Region?", "answer_span": "Your on-premises network must provide wide area network (WAN) access back to the Region.", "chunk": "sites that have limited space or smaller capacity requirements. • Outpost owner – The account owner for the account that places the AWS Outposts order. After AWS engages with the customer, the owner may include additional points of contact. AWS will communicate with the contacts to clarify orders, installation appointments, and hardware maintenance and replacement. Contact AWS Support Center if the contact information changes. • Service link – Network route that enables communication between your Outpost and its associated AWS Region. Each Outpost is an extension of an Availability Zone and its associated Region. • Local gateway (LGW) – A logical interconnect virtual router that enables communication between an Outposts rack and your on-premises network. • Local network interface – A network interface that enables communication from an Outposts server and your on-premises network. AWS resources on Outposts You can create the following resources on your Outpost to support low-latency workloads that must run in close proximity to on-premises data and applications: AWS resources on Outposts 2 AWS Outposts User Guide for Outposts racks How AWS Outposts works AWS Outposts is designed to operate with a constant and consistent connection between your Outpost and an AWS Region. To achieve this connection to the Region, and to the local workloads in your on-premises environment, you must connect your Outpost to your on-premises network. Your on-premises network must provide wide area network (WAN) access back to the Region. It must also provide LAN or WAN access to the local network where your on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon"} +{"global_id": 1424, "doc_id": "outposts", "chunk_id": "3", "question_id": 1, "question": "What does AWS Outposts extend from an AWS Region?", "answer_span": "AWS Outposts extends an Amazon VPC from an AWS Region to an Outpost", "chunk": "on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon VPC from an AWS Region to an Outpost with the VPC components that are accessible in the Region, including internet gateways, virtual private gateways, Amazon VPC Transit Gateways, and VPC endpoints. An Outpost is homed to an Availability Zone in the Region and is an extension of that Availability Zone that you can use for resiliency. The following diagram shows the network components for your Outpost. 
• An AWS Region and an on-premises network • A VPC with multiple subnets in the Region • An Outpost in the on-premises network • Connectivity between the Outpost and local network provided: • For Outposts racks: a local gateway • For Outposts servers: a local network interface (LNI) Network components 13 AWS Outposts User Guide for Outposts racks VPCs and subnets A virtual private cloud (VPC) spans all Availability Zones in its AWS Region. You can extend any VPC in the Region to your Outpost by adding an Outpost subnet. To add an Outpost subnet to a VPC, specify the Amazon Resource Name (ARN) of the Outpost when you create the subnet. Outposts support multiple subnets. You can specify the EC2 instance subnet when you launch the EC2 instance in your Outpost. You can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User"} +{"global_id": 1425, "doc_id": "outposts", "chunk_id": "3", "question_id": 2, "question": "What is an Outpost homed to?", "answer_span": "An Outpost is homed to an Availability Zone in the Region", "chunk": "on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon VPC from an AWS Region to an Outpost with the VPC components that are accessible in the Region, including internet gateways, virtual private gateways, Amazon VPC Transit Gateways, and VPC endpoints. An Outpost is homed to an Availability Zone in the Region and is an extension of that Availability Zone that you can use for resiliency. The following diagram shows the network components for your Outpost. • An AWS Region and an on-premises network • A VPC with multiple subnets in the Region • An Outpost in the on-premises network • Connectivity between the Outpost and local network provided: • For Outposts racks: a local gateway • For Outposts servers: a local network interface (LNI) Network components 13 AWS Outposts User Guide for Outposts racks VPCs and subnets A virtual private cloud (VPC) spans all Availability Zones in its AWS Region. You can extend any VPC in the Region to your Outpost by adding an Outpost subnet. To add an Outpost subnet to a VPC, specify the Amazon Resource Name (ARN) of the Outpost when you create the subnet. Outposts support multiple subnets. You can specify the EC2 instance subnet when you launch the EC2 instance in your Outpost. You can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User"} +{"global_id": 1426, "doc_id": "outposts", "chunk_id": "3", "question_id": 3, "question": "What must you specify to add an Outpost subnet to a VPC?", "answer_span": "specify the Amazon Resource Name (ARN) of the Outpost when you create the subnet", "chunk": "on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. 
Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon VPC from an AWS Region to an Outpost with the VPC components that are accessible in the Region, including internet gateways, virtual private gateways, Amazon VPC Transit Gateways, and VPC endpoints. An Outpost is homed to an Availability Zone in the Region and is an extension of that Availability Zone that you can use for resiliency. The following diagram shows the network components for your Outpost. • An AWS Region and an on-premises network • A VPC with multiple subnets in the Region • An Outpost in the on-premises network • Connectivity between the Outpost and local network provided: • For Outposts racks: a local gateway • For Outposts servers: a local network interface (LNI) Network components 13 AWS Outposts User Guide for Outposts racks VPCs and subnets A virtual private cloud (VPC) spans all Availability Zones in its AWS Region. You can extend any VPC in the Region to your Outpost by adding an Outpost subnet. To add an Outpost subnet to a VPC, specify the Amazon Resource Name (ARN) of the Outpost when you create the subnet. Outposts support multiple subnets. You can specify the EC2 instance subnet when you launch the EC2 instance in your Outpost. You can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User"} +{"global_id": 1427, "doc_id": "outposts", "chunk_id": "3", "question_id": 4, "question": "Can you specify the underlying hardware where the EC2 instance is deployed?", "answer_span": "You can't specify the underlying hardware where the instance is deployed", "chunk": "on-premises workloads or applications reside. The following diagram illustrates both Outpost form factors. Contents • Network components • VPCs and subnets • Routing 12 AWS Outposts User Guide for Outposts racks • DNS • Service link • Local gateways • Local network interfaces Network components AWS Outposts extends an Amazon VPC from an AWS Region to an Outpost with the VPC components that are accessible in the Region, including internet gateways, virtual private gateways, Amazon VPC Transit Gateways, and VPC endpoints. An Outpost is homed to an Availability Zone in the Region and is an extension of that Availability Zone that you can use for resiliency. The following diagram shows the network components for your Outpost. • An AWS Region and an on-premises network • A VPC with multiple subnets in the Region • An Outpost in the on-premises network • Connectivity between the Outpost and local network provided: • For Outposts racks: a local gateway • For Outposts servers: a local network interface (LNI) Network components 13 AWS Outposts User Guide for Outposts racks VPCs and subnets A virtual private cloud (VPC) spans all Availability Zones in its AWS Region. You can extend any VPC in the Region to your Outpost by adding an Outpost subnet. To add an Outpost subnet to a VPC, specify the Amazon Resource Name (ARN) of the Outpost when you create the subnet. Outposts support multiple subnets. You can specify the EC2 instance subnet when you launch the EC2 instance in your Outpost. 
You can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User"} +{"global_id": 1428, "doc_id": "outposts", "chunk_id": "4", "question_id": 1, "question": "What is the Outpost a pool of?", "answer_span": "the Outpost is a pool of AWS compute and storage capacity.", "chunk": "can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User Guide. You create Outpost subnets from the VPC CIDR range of the VPC where you created the Outpost. You can use the Outpost address ranges for resources, such as EC2 instances that reside in the Outpost subnet. Routing By default, every Outpost subnet inherits the main route table from its VPC. You can create a custom route table and associate it with an Outpost subnet. The route tables for Outpost subnets work as they do for Availability Zone subnets. You can specify IP addresses, internet gateways, local gateways, virtual private gateways, and peering connections as destinations. For example, each Outpost subnet, either through the inherited main route table, or a custom table, inherits the VPC local route. This means that all traffic in the VPC, including the Outpost subnet with a destination in the VPC CIDR remains routed in the VPC. Outpost subnet route tables can include the following destinations: • VPC CIDR range – AWS defines this at installation. This is the local route and applies to all VPC routing, including traffic between Outpost instances in the same VPC. • AWS Region destinations – This includes prefix lists for Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB gateway endpoint, AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1429, "doc_id": "outposts", "chunk_id": "4", "question_id": 2, "question": "What can each Outpost support?", "answer_span": "Each Outpost can support multiple VPCs that can have one or more Outpost subnets.", "chunk": "can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User Guide. You create Outpost subnets from the VPC CIDR range of the VPC where you created the Outpost. You can use the Outpost address ranges for resources, such as EC2 instances that reside in the Outpost subnet. Routing By default, every Outpost subnet inherits the main route table from its VPC. You can create a custom route table and associate it with an Outpost subnet. The route tables for Outpost subnets work as they do for Availability Zone subnets. You can specify IP addresses, internet gateways, local gateways, virtual private gateways, and peering connections as destinations. 
For example, each Outpost subnet, either through the inherited main route table, or a custom table, inherits the VPC local route. This means that all traffic in the VPC, including the Outpost subnet with a destination in the VPC CIDR remains routed in the VPC. Outpost subnet route tables can include the following destinations: • VPC CIDR range – AWS defines this at installation. This is the local route and applies to all VPC routing, including traffic between Outpost instances in the same VPC. • AWS Region destinations – This includes prefix lists for Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB gateway endpoint, AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1430, "doc_id": "outposts", "chunk_id": "4", "question_id": 3, "question": "What do Outpost subnet route tables inherit by default?", "answer_span": "every Outpost subnet inherits the main route table from its VPC.", "chunk": "can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User Guide. You create Outpost subnets from the VPC CIDR range of the VPC where you created the Outpost. You can use the Outpost address ranges for resources, such as EC2 instances that reside in the Outpost subnet. Routing By default, every Outpost subnet inherits the main route table from its VPC. You can create a custom route table and associate it with an Outpost subnet. The route tables for Outpost subnets work as they do for Availability Zone subnets. You can specify IP addresses, internet gateways, local gateways, virtual private gateways, and peering connections as destinations. For example, each Outpost subnet, either through the inherited main route table, or a custom table, inherits the VPC local route. This means that all traffic in the VPC, including the Outpost subnet with a destination in the VPC CIDR remains routed in the VPC. Outpost subnet route tables can include the following destinations: • VPC CIDR range – AWS defines this at installation. This is the local route and applies to all VPC routing, including traffic between Outpost instances in the same VPC. • AWS Region destinations – This includes prefix lists for Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB gateway endpoint, AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1431, "doc_id": "outposts", "chunk_id": "4", "question_id": 4, "question": "What does the local route apply to?", "answer_span": "This is the local route and applies to all VPC routing, including traffic between Outpost instances in the same VPC.", "chunk": "can't specify the underlying hardware where the instance is deployed, because the Outpost is a pool of AWS compute and storage capacity. Each Outpost can support multiple VPCs that can have one or more Outpost subnets. For information about VPC quotas, see Amazon VPC Quotas in the Amazon VPC User Guide. 
You create Outpost subnets from the VPC CIDR range of the VPC where you created the Outpost. You can use the Outpost address ranges for resources, such as EC2 instances that reside in the Outpost subnet. Routing By default, every Outpost subnet inherits the main route table from its VPC. You can create a custom route table and associate it with an Outpost subnet. The route tables for Outpost subnets work as they do for Availability Zone subnets. You can specify IP addresses, internet gateways, local gateways, virtual private gateways, and peering connections as destinations. For example, each Outpost subnet, either through the inherited main route table, or a custom table, inherits the VPC local route. This means that all traffic in the VPC, including the Outpost subnet with a destination in the VPC CIDR remains routed in the VPC. Outpost subnet route tables can include the following destinations: • VPC CIDR range – AWS defines this at installation. This is the local route and applies to all VPC routing, including traffic between Outpost instances in the same VPC. • AWS Region destinations – This includes prefix lists for Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB gateway endpoint, AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1432, "doc_id": "outposts", "chunk_id": "5", "question_id": 1, "question": "What types of gateways are mentioned in the chunk?", "answer_span": "AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering.", "chunk": "AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1433, "doc_id": "outposts", "chunk_id": "5", "question_id": 2, "question": "What happens to the traffic between VPCs on the same Outpost?", "answer_span": "the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region.", "chunk": "AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1434, "doc_id": "outposts", "chunk_id": "5", "question_id": 3, "question": "What is the relationship between VPCs and subnets mentioned in the chunk?", "answer_span": "VPCs and subnets", "chunk": "AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1435, "doc_id": "outposts", "chunk_id": "5", "question_id": 4, "question": "What is required for a peering connection to exist?", "answer_span": "a peering connection with multiple VPCs on the same Outpost", "chunk": "AWS Transit Gateways, virtual private gateways, internet gateways, and VPC peering. 
If you have a peering connection with multiple VPCs on the same Outpost, the traffic between the VPCs remains in the Outpost and does not use the service link back to the Region. VPCs and subnets 14"} +{"global_id": 1436, "doc_id": "dynamodb", "chunk_id": "0", "question_id": 1, "question": "What type of database is Amazon DynamoDB?", "answer_span": "Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale.", "chunk": "Amazon DynamoDB Developer Guide What is Amazon DynamoDB? Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. DynamoDB addresses your needs to overcome scaling and operational complexities of relational databases. DynamoDB is purpose-built and optimized for operational workloads that require consistent performance at any scale. For example, DynamoDB delivers consistent single-digit millisecond performance for a shopping cart use case, whether you've 10 or 100 million users. Launched in 2012, DynamoDB continues to help you move away from relational databases while reducing cost and improving performance at scale. Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB scales to support tables of virtually any size while providing consistent single-digit millisecond performance and high availability. For events, such as Amazon Prime Day, DynamoDB powers multiple high-traffic Amazon properties and systems, including Alexa, Amazon.com sites, and all Amazon fulfillment centers. For such events, DynamoDB APIs have handled trillions of calls from Amazon properties and systems. DynamoDB continuously serves hundreds of customers with tables that have peak traffic of over half a million requests per second. It also serves hundreds of customers whose table sizes exceed 200 TB, and processes over one billion requests per hour. Topics • Characteristics of DynamoDB • DynamoDB use cases • Capabilities of DynamoDB • Service integrations • Security • Resilience • Accessing DynamoDB • DynamoDB pricing • Getting started with DynamoDB API Version 2012-08-10 1 Amazon DynamoDB Developer Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for"} +{"global_id": 1437, "doc_id": "dynamodb", "chunk_id": "0", "question_id": 2, "question": "When was DynamoDB launched?", "answer_span": "Launched in 2012, DynamoDB continues to help you move away from relational databases while reducing cost and improving performance at scale.", "chunk": "Amazon DynamoDB Developer Guide What is Amazon DynamoDB? Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. DynamoDB addresses your needs to overcome scaling and operational complexities of relational databases. DynamoDB is purpose-built and optimized for operational workloads that require consistent performance at any scale. For example, DynamoDB delivers consistent single-digit millisecond performance for a shopping cart use case, whether you've 10 or 100 million users. 
Launched in 2012, DynamoDB continues to help you move away from relational databases while reducing cost and improving performance at scale. Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB scales to support tables of virtually any size while providing consistent single-digit millisecond performance and high availability. For events, such as Amazon Prime Day, DynamoDB powers multiple high-traffic Amazon properties and systems, including Alexa, Amazon.com sites, and all Amazon fulfillment centers. For such events, DynamoDB APIs have handled trillions of calls from Amazon properties and systems. DynamoDB continuously serves hundreds of customers with tables that have peak traffic of over half a million requests per second. It also serves hundreds of customers whose table sizes exceed 200 TB, and processes over one billion requests per hour. Topics • Characteristics of DynamoDB • DynamoDB use cases • Capabilities of DynamoDB • Service integrations • Security • Resilience • Accessing DynamoDB • DynamoDB pricing • Getting started with DynamoDB API Version 2012-08-10 1 Amazon DynamoDB Developer Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for"} +{"global_id": 1438, "doc_id": "dynamodb", "chunk_id": "0", "question_id": 3, "question": "What performance does DynamoDB deliver for a shopping cart use case?", "answer_span": "DynamoDB delivers consistent single-digit millisecond performance for a shopping cart use case, whether you've 10 or 100 million users.", "chunk": "Amazon DynamoDB Developer Guide What is Amazon DynamoDB? Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. DynamoDB addresses your needs to overcome scaling and operational complexities of relational databases. DynamoDB is purpose-built and optimized for operational workloads that require consistent performance at any scale. For example, DynamoDB delivers consistent single-digit millisecond performance for a shopping cart use case, whether you've 10 or 100 million users. Launched in 2012, DynamoDB continues to help you move away from relational databases while reducing cost and improving performance at scale. Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB scales to support tables of virtually any size while providing consistent single-digit millisecond performance and high availability. For events, such as Amazon Prime Day, DynamoDB powers multiple high-traffic Amazon properties and systems, including Alexa, Amazon.com sites, and all Amazon fulfillment centers. For such events, DynamoDB APIs have handled trillions of calls from Amazon properties and systems. DynamoDB continuously serves hundreds of customers with tables that have peak traffic of over half a million requests per second. It also serves hundreds of customers whose table sizes exceed 200 TB, and processes over one billion requests per hour. 
Topics • Characteristics of DynamoDB • DynamoDB use cases • Capabilities of DynamoDB • Service integrations • Security • Resilience • Accessing DynamoDB • DynamoDB pricing • Getting started with DynamoDB API Version 2012-08-10 1 Amazon DynamoDB Developer Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for"} +{"global_id": 1439, "doc_id": "dynamodb", "chunk_id": "0", "question_id": 4, "question": "What does DynamoDB's on-demand capacity mode offer?", "answer_span": "DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for", "chunk": "Amazon DynamoDB Developer Guide What is Amazon DynamoDB? Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. DynamoDB addresses your needs to overcome scaling and operational complexities of relational databases. DynamoDB is purpose-built and optimized for operational workloads that require consistent performance at any scale. For example, DynamoDB delivers consistent single-digit millisecond performance for a shopping cart use case, whether you've 10 or 100 million users. Launched in 2012, DynamoDB continues to help you move away from relational databases while reducing cost and improving performance at scale. Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB scales to support tables of virtually any size while providing consistent single-digit millisecond performance and high availability. For events, such as Amazon Prime Day, DynamoDB powers multiple high-traffic Amazon properties and systems, including Alexa, Amazon.com sites, and all Amazon fulfillment centers. For such events, DynamoDB APIs have handled trillions of calls from Amazon properties and systems. DynamoDB continuously serves hundreds of customers with tables that have peak traffic of over half a million requests per second. It also serves hundreds of customers whose table sizes exceed 200 TB, and processes over one billion requests per hour. Topics • Characteristics of DynamoDB • DynamoDB use cases • Capabilities of DynamoDB • Service integrations • Security • Resilience • Accessing DynamoDB • DynamoDB pricing • Getting started with DynamoDB API Version 2012-08-10 1 Amazon DynamoDB Developer Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for"} +{"global_id": 1440, "doc_id": "dynamodb", "chunk_id": "1", "question_id": 1, "question": "What does DynamoDB provide in terms of maintenance?", "answer_span": "DynamoDB provides zero downtime maintenance.", "chunk": "Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. 
DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for read and write requests so you only pay for what you use. With on-demand, DynamoDB instantly scales up or down your tables to adjust for capacity and maintains performance with zero administration. It also scales down to zero so you don't pay for throughput when your table doesn't have traffic and there are no cold starts. NoSQL As a NoSQL database, DynamoDB is purpose-built to deliver improved performance, scalability, manageability, and flexibility compared to traditional relational databases. To support a wide variety of use cases, DynamoDB supports both key-value and document data models. Unlike relational databases, DynamoDB doesn't support a JOIN operator. We recommend that you denormalize your data model to reduce database round trips and processing power needed to answer queries. As a NoSQL database, DynamoDB provides strong read consistency and ACID transactions to build enterprise-grade applications. Fully managed As a fully managed database service, DynamoDB handles the undifferentiated heavy lifting of managing a database so that you can focus on building value for your customers. It handles setup, configurations, maintenance, high availability, hardware provisioning, security, backups, monitoring, and more. This ensures that when you create a DynamoDB table, it's instantly ready for production workloads. DynamoDB constantly improves its availability, reliability, performance, security, and functionality without requiring upgrades or downtime. Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB"} +{"global_id": 1441, "doc_id": "dynamodb", "chunk_id": "1", "question_id": 2, "question": "How does DynamoDB's on-demand capacity mode pricing work?", "answer_span": "DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for read and write requests so you only pay for what you use.", "chunk": "Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for read and write requests so you only pay for what you use. With on-demand, DynamoDB instantly scales up or down your tables to adjust for capacity and maintains performance with zero administration. It also scales down to zero so you don't pay for throughput when your table doesn't have traffic and there are no cold starts. NoSQL As a NoSQL database, DynamoDB is purpose-built to deliver improved performance, scalability, manageability, and flexibility compared to traditional relational databases. To support a wide variety of use cases, DynamoDB supports both key-value and document data models. Unlike relational databases, DynamoDB doesn't support a JOIN operator. We recommend that you denormalize your data model to reduce database round trips and processing power needed to answer queries. As a NoSQL database, DynamoDB provides strong read consistency and ACID transactions to build enterprise-grade applications. 
Fully managed As a fully managed database service, DynamoDB handles the undifferentiated heavy lifting of managing a database so that you can focus on building value for your customers. It handles setup, configurations, maintenance, high availability, hardware provisioning, security, backups, monitoring, and more. This ensures that when you create a DynamoDB table, it's instantly ready for production workloads. DynamoDB constantly improves its availability, reliability, performance, security, and functionality without requiring upgrades or downtime. Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB"} +{"global_id": 1442, "doc_id": "dynamodb", "chunk_id": "1", "question_id": 3, "question": "What type of database is DynamoDB?", "answer_span": "As a NoSQL database, DynamoDB is purpose-built to deliver improved performance, scalability, manageability, and flexibility compared to traditional relational databases.", "chunk": "Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for read and write requests so you only pay for what you use. With on-demand, DynamoDB instantly scales up or down your tables to adjust for capacity and maintains performance with zero administration. It also scales down to zero so you don't pay for throughput when your table doesn't have traffic and there are no cold starts. NoSQL As a NoSQL database, DynamoDB is purpose-built to deliver improved performance, scalability, manageability, and flexibility compared to traditional relational databases. To support a wide variety of use cases, DynamoDB supports both key-value and document data models. Unlike relational databases, DynamoDB doesn't support a JOIN operator. We recommend that you denormalize your data model to reduce database round trips and processing power needed to answer queries. As a NoSQL database, DynamoDB provides strong read consistency and ACID transactions to build enterprise-grade applications. Fully managed As a fully managed database service, DynamoDB handles the undifferentiated heavy lifting of managing a database so that you can focus on building value for your customers. It handles setup, configurations, maintenance, high availability, hardware provisioning, security, backups, monitoring, and more. This ensures that when you create a DynamoDB table, it's instantly ready for production workloads. DynamoDB constantly improves its availability, reliability, performance, security, and functionality without requiring upgrades or downtime. Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. 
To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB"} +{"global_id": 1443, "doc_id": "dynamodb", "chunk_id": "1", "question_id": 4, "question": "What does DynamoDB handle as a fully managed database service?", "answer_span": "DynamoDB handles the undifferentiated heavy lifting of managing a database so that you can focus on building value for your customers.", "chunk": "Guide Characteristics of DynamoDB Serverless With DynamoDB, you don't need to provision any servers, or patch, manage, install, maintain, or operate any software. DynamoDB provides zero downtime maintenance. It has no versions (major, minor, or patch), and there are no maintenance windows. DynamoDB's on-demand capacity mode offers pay-as-you-go pricing for read and write requests so you only pay for what you use. With on-demand, DynamoDB instantly scales up or down your tables to adjust for capacity and maintains performance with zero administration. It also scales down to zero so you don't pay for throughput when your table doesn't have traffic and there are no cold starts. NoSQL As a NoSQL database, DynamoDB is purpose-built to deliver improved performance, scalability, manageability, and flexibility compared to traditional relational databases. To support a wide variety of use cases, DynamoDB supports both key-value and document data models. Unlike relational databases, DynamoDB doesn't support a JOIN operator. We recommend that you denormalize your data model to reduce database round trips and processing power needed to answer queries. As a NoSQL database, DynamoDB provides strong read consistency and ACID transactions to build enterprise-grade applications. Fully managed As a fully managed database service, DynamoDB handles the undifferentiated heavy lifting of managing a database so that you can focus on building value for your customers. It handles setup, configurations, maintenance, high availability, hardware provisioning, security, backups, monitoring, and more. This ensures that when you create a DynamoDB table, it's instantly ready for production workloads. DynamoDB constantly improves its availability, reliability, performance, security, and functionality without requiring upgrades or downtime. Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB"} +{"global_id": 1444, "doc_id": "dynamodb", "chunk_id": "2", "question_id": 1, "question": "What is the performance characteristic of DynamoDB?", "answer_span": "DynamoDB delivers consistent single-digit millisecond performance for your application, whether you've 100 or 100 million users.", "chunk": "availability, reliability, performance, security, and functionality without requiring upgrades or downtime. Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB Developer Guide performance, DynamoDB is optimized for high-performance workloads and provides APIs that encourage efficient database usage. It omits features that are inefficient and non-performing at scale, for example, JOIN operations. 
DynamoDB delivers consistent single-digit millisecond performance for your application, whether you've 100 or 100 million users. DynamoDB use cases Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB is ideal for use cases that require consistent performance at any scale with little to zero operational overhead. The following list presents some use cases where you can use DynamoDB: • Financial service applications – Suppose you're a financial services company building applications, such as live trading and routing, loan management, token generation, and transaction ledgers. With DynamoDB global tables, your applications can respond to events and serve traffic from your chosen AWS Regions with fast, local read and write performance. DynamoDB is suitable for applications with the most stringent availability requirements. It removes the operational burden of manually scaling instances for increased storage or throughput, versioning, and licensing. You can use DynamoDB transactions to achieve atomicity, consistency, isolation, and durability (ACID) across one or more tables with a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can"} +{"global_id": 1445, "doc_id": "dynamodb", "chunk_id": "2", "question_id": 2, "question": "What type of applications is DynamoDB suitable for?", "answer_span": "DynamoDB is suitable for applications with the most stringent availability requirements.", "chunk": "availability, reliability, performance, security, and functionality without requiring upgrades or downtime. Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB Developer Guide performance, DynamoDB is optimized for high-performance workloads and provides APIs that encourage efficient database usage. It omits features that are inefficient and non-performing at scale, for example, JOIN operations. DynamoDB delivers consistent single-digit millisecond performance for your application, whether you've 100 or 100 million users. DynamoDB use cases Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB is ideal for use cases that require consistent performance at any scale with little to zero operational overhead. The following list presents some use cases where you can use DynamoDB: • Financial service applications – Suppose you're a financial services company building applications, such as live trading and routing, loan management, token generation, and transaction ledgers. With DynamoDB global tables, your applications can respond to events and serve traffic from your chosen AWS Regions with fast, local read and write performance. DynamoDB is suitable for applications with the most stringent availability requirements. It removes the operational burden of manually scaling instances for increased storage or throughput, versioning, and licensing. 
You can use DynamoDB transactions to achieve atomicity, consistency, isolation, and durability (ACID) across one or more tables with a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can"} +{"global_id": 1446, "doc_id": "dynamodb", "chunk_id": "2", "question_id": 3, "question": "What does DynamoDB transactions achieve?", "answer_span": "You can use DynamoDB transactions to achieve atomicity, consistency, isolation, and durability (ACID) across one or more tables with a single request.", "chunk": "availability, reliability, performance, security, and functionality without requiring upgrades or downtime. Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB Developer Guide performance, DynamoDB is optimized for high-performance workloads and provides APIs that encourage efficient database usage. It omits features that are inefficient and non-performing at scale, for example, JOIN operations. DynamoDB delivers consistent single-digit millisecond performance for your application, whether you've 100 or 100 million users. DynamoDB use cases Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB is ideal for use cases that require consistent performance at any scale with little to zero operational overhead. The following list presents some use cases where you can use DynamoDB: • Financial service applications – Suppose you're a financial services company building applications, such as live trading and routing, loan management, token generation, and transaction ledgers. With DynamoDB global tables, your applications can respond to events and serve traffic from your chosen AWS Regions with fast, local read and write performance. DynamoDB is suitable for applications with the most stringent availability requirements. It removes the operational burden of manually scaling instances for increased storage or throughput, versioning, and licensing. You can use DynamoDB transactions to achieve atomicity, consistency, isolation, and durability (ACID) across one or more tables with a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can"} +{"global_id": 1447, "doc_id": "dynamodb", "chunk_id": "2", "question_id": 4, "question": "What is a use case for financial service applications using DynamoDB?", "answer_span": "Suppose you're a financial services company building applications, such as live trading and routing, loan management, token generation, and transaction ledgers.", "chunk": "availability, reliability, performance, security, and functionality without requiring upgrades or downtime. 
Single-digit millisecond performance at any scale DynamoDB was purpose-built to improve upon the performance and scalability of relational databases to deliver single-digit millisecond performance at any scale. To achieve this scale and Characteristics API Version 2012-08-10 2 Amazon DynamoDB Developer Guide performance, DynamoDB is optimized for high-performance workloads and provides APIs that encourage efficient database usage. It omits features that are inefficient and non-performing at scale, for example, JOIN operations. DynamoDB delivers consistent single-digit millisecond performance for your application, whether you've 100 or 100 million users. DynamoDB use cases Customers across all sizes, industries, and geographies use DynamoDB to build modern, serverless applications that can start small and scale globally. DynamoDB is ideal for use cases that require consistent performance at any scale with little to zero operational overhead. The following list presents some use cases where you can use DynamoDB: • Financial service applications – Suppose you're a financial services company building applications, such as live trading and routing, loan management, token generation, and transaction ledgers. With DynamoDB global tables, your applications can respond to events and serve traffic from your chosen AWS Regions with fast, local read and write performance. DynamoDB is suitable for applications with the most stringent availability requirements. It removes the operational burden of manually scaling instances for increased storage or throughput, versioning, and licensing. You can use DynamoDB transactions to achieve atomicity, consistency, isolation, and durability (ACID) across one or more tables with a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can"} +{"global_id": 1448, "doc_id": "dynamodb", "chunk_id": "3", "question_id": 1, "question": "What type of transactions suit workloads that include processing financial transactions or fulfilling orders?", "answer_span": "ACID transactions suit workloads that include processing financial transactions or fulfilling orders.", "chunk": "a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can use DynamoDB for all parts of game platforms, for example, game state, player data, session history, and leaderboards. Choose DynamoDB for its scale, consistent performance, and the ease of operations provided by its serverless architecture. DynamoDB is well suited for scale-out architectures needed to support successful games. It quickly scales your game’s throughput both in and out (scale to zero with no cold start). This scalability optimizes your architecture's efficiency whether you’re scaling out for peak traffic or scaling back when gameplay usage is low. • Streaming applications – Media and entertainment companies use DynamoDB as a metadata index for content, content management service, or to serve near real-time sports statistics. 
They also use DynamoDB to run user watchlist and bookmarking services and process billions of daily customer events for generating recommendations. These customers benefit from DynamoDB's Use cases API Version 2012-08-10 3 Amazon DynamoDB Developer Guide scalability, performance, and resiliency. DynamoDB scales to workload changes as they ramp up or down, enabling streaming media use cases that can support any levels of demand. To learn more about how customers from different industries use DynamoDB, see Amazon DynamoDB Customers and This is My Architecture. Capabilities of DynamoDB Multi-active replication with global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. With global tables, you can specify the AWS Regions where you"} +{"global_id": 1449, "doc_id": "dynamodb", "chunk_id": "3", "question_id": 2, "question": "What is DynamoDB well suited for in gaming applications?", "answer_span": "DynamoDB is well suited for scale-out architectures needed to support successful games.", "chunk": "a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can use DynamoDB for all parts of game platforms, for example, game state, player data, session history, and leaderboards. Choose DynamoDB for its scale, consistent performance, and the ease of operations provided by its serverless architecture. DynamoDB is well suited for scale-out architectures needed to support successful games. It quickly scales your game’s throughput both in and out (scale to zero with no cold start). This scalability optimizes your architecture's efficiency whether you’re scaling out for peak traffic or scaling back when gameplay usage is low. • Streaming applications – Media and entertainment companies use DynamoDB as a metadata index for content, content management service, or to serve near real-time sports statistics. They also use DynamoDB to run user watchlist and bookmarking services and process billions of daily customer events for generating recommendations. These customers benefit from DynamoDB's Use cases API Version 2012-08-10 3 Amazon DynamoDB Developer Guide scalability, performance, and resiliency. DynamoDB scales to workload changes as they ramp up or down, enabling streaming media use cases that can support any levels of demand. To learn more about how customers from different industries use DynamoDB, see Amazon DynamoDB Customers and This is My Architecture. Capabilities of DynamoDB Multi-active replication with global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. 
With global tables, you can specify the AWS Regions where you"} +{"global_id": 1450, "doc_id": "dynamodb", "chunk_id": "3", "question_id": 3, "question": "How do media and entertainment companies use DynamoDB?", "answer_span": "Media and entertainment companies use DynamoDB as a metadata index for content, content management service, or to serve near real-time sports statistics.", "chunk": "a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can use DynamoDB for all parts of game platforms, for example, game state, player data, session history, and leaderboards. Choose DynamoDB for its scale, consistent performance, and the ease of operations provided by its serverless architecture. DynamoDB is well suited for scale-out architectures needed to support successful games. It quickly scales your game’s throughput both in and out (scale to zero with no cold start). This scalability optimizes your architecture's efficiency whether you’re scaling out for peak traffic or scaling back when gameplay usage is low. • Streaming applications – Media and entertainment companies use DynamoDB as a metadata index for content, content management service, or to serve near real-time sports statistics. They also use DynamoDB to run user watchlist and bookmarking services and process billions of daily customer events for generating recommendations. These customers benefit from DynamoDB's Use cases API Version 2012-08-10 3 Amazon DynamoDB Developer Guide scalability, performance, and resiliency. DynamoDB scales to workload changes as they ramp up or down, enabling streaming media use cases that can support any levels of demand. To learn more about how customers from different industries use DynamoDB, see Amazon DynamoDB Customers and This is My Architecture. Capabilities of DynamoDB Multi-active replication with global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. With global tables, you can specify the AWS Regions where you"} +{"global_id": 1451, "doc_id": "dynamodb", "chunk_id": "3", "question_id": 4, "question": "What availability do global tables provide?", "answer_span": "Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability.", "chunk": "a single request. (ACID) transactions suit workloads that include processing financial transactions or fulfilling orders. DynamoDB instantly accommodates your workloads as they ramp up or down, enabling you to efficiently scale your database for market conditions, such as trading hours. • Gaming applications – As a gaming company, you can use DynamoDB for all parts of game platforms, for example, game state, player data, session history, and leaderboards. Choose DynamoDB for its scale, consistent performance, and the ease of operations provided by its serverless architecture. DynamoDB is well suited for scale-out architectures needed to support successful games. It quickly scales your game’s throughput both in and out (scale to zero with no cold start). 
This scalability optimizes your architecture's efficiency whether you’re scaling out for peak traffic or scaling back when gameplay usage is low. • Streaming applications – Media and entertainment companies use DynamoDB as a metadata index for content, content management service, or to serve near real-time sports statistics. They also use DynamoDB to run user watchlist and bookmarking services and process billions of daily customer events for generating recommendations. These customers benefit from DynamoDB's Use cases API Version 2012-08-10 3 Amazon DynamoDB Developer Guide scalability, performance, and resiliency. DynamoDB scales to workload changes as they ramp up or down, enabling streaming media use cases that can support any levels of demand. To learn more about how customers from different industries use DynamoDB, see Amazon DynamoDB Customers and This is My Architecture. Capabilities of DynamoDB Multi-active replication with global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. With global tables, you can specify the AWS Regions where you"} +{"global_id": 1452, "doc_id": "dynamodb", "chunk_id": "4", "question_id": 1, "question": "What do global tables provide?", "answer_span": "Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability.", "chunk": "global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. With global tables, you can specify the AWS Regions where you want the tables to be available. DynamoDB replicates ongoing data changes to all of these tables. Your globally distributed applications can access data locally in your selected Regions to achieve single-digit millisecond read and write performance. Because global tables are multi-active, you don't need a primary table. This means there are no complicated or delayed fail-overs, or database downtime when failing over an application between Regions. ACID transactions DynamoDB is built for mission-critical workloads. It includes (ACID) transactions support for applications that require complex business logic. DynamoDB provides native, server-side support for transactions, simplifying the developer experience of making coordinated, all-or-nothing changes to multiple items within and across tables. Change data capture for event-driven architectures DynamoDB supports streaming of item-level change data capture (CDC) records in near-real time. It offers two streaming models for CDC: DynamoDB Streams and Kinesis Data Streams for DynamoDB. Whenever an application creates, updates, or deletes items in a table, streams records a time-ordered sequence of every item-level change in near-real time. This makes DynamoDB Streams ideal for applications with event-driven architecture to consume and act upon the changes. Capabilities API Version 2012-08-10 4 Amazon DynamoDB Developer Guide Secondary indexes DynamoDB offers the option to create both global and local secondary indexes, which let you query the table data using an alternate key. 
With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. Service integrations DynamoDB broadly integrates with several AWS services to"} +{"global_id": 1453, "doc_id": "dynamodb", "chunk_id": "4", "question_id": 2, "question": "What is the benefit of using global tables?", "answer_span": "Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution.", "chunk": "global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. With global tables, you can specify the AWS Regions where you want the tables to be available. DynamoDB replicates ongoing data changes to all of these tables. Your globally distributed applications can access data locally in your selected Regions to achieve single-digit millisecond read and write performance. Because global tables are multi-active, you don't need a primary table. This means there are no complicated or delayed fail-overs, or database downtime when failing over an application between Regions. ACID transactions DynamoDB is built for mission-critical workloads. It includes (ACID) transactions support for applications that require complex business logic. DynamoDB provides native, server-side support for transactions, simplifying the developer experience of making coordinated, all-or-nothing changes to multiple items within and across tables. Change data capture for event-driven architectures DynamoDB supports streaming of item-level change data capture (CDC) records in near-real time. It offers two streaming models for CDC: DynamoDB Streams and Kinesis Data Streams for DynamoDB. Whenever an application creates, updates, or deletes items in a table, streams records a time-ordered sequence of every item-level change in near-real time. This makes DynamoDB Streams ideal for applications with event-driven architecture to consume and act upon the changes. Capabilities API Version 2012-08-10 4 Amazon DynamoDB Developer Guide Secondary indexes DynamoDB offers the option to create both global and local secondary indexes, which let you query the table data using an alternate key. With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. Service integrations DynamoDB broadly integrates with several AWS services to"} +{"global_id": 1454, "doc_id": "dynamodb", "chunk_id": "4", "question_id": 3, "question": "What kind of support does DynamoDB include for applications that require complex business logic?", "answer_span": "It includes (ACID) transactions support for applications that require complex business logic.", "chunk": "global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. With global tables, you can specify the AWS Regions where you want the tables to be available. DynamoDB replicates ongoing data changes to all of these tables. 
Your globally distributed applications can access data locally in your selected Regions to achieve single-digit millisecond read and write performance. Because global tables are multi-active, you don't need a primary table. This means there are no complicated or delayed fail-overs, or database downtime when failing over an application between Regions. ACID transactions DynamoDB is built for mission-critical workloads. It includes (ACID) transactions support for applications that require complex business logic. DynamoDB provides native, server-side support for transactions, simplifying the developer experience of making coordinated, all-or-nothing changes to multiple items within and across tables. Change data capture for event-driven architectures DynamoDB supports streaming of item-level change data capture (CDC) records in near-real time. It offers two streaming models for CDC: DynamoDB Streams and Kinesis Data Streams for DynamoDB. Whenever an application creates, updates, or deletes items in a table, streams records a time-ordered sequence of every item-level change in near-real time. This makes DynamoDB Streams ideal for applications with event-driven architecture to consume and act upon the changes. Capabilities API Version 2012-08-10 4 Amazon DynamoDB Developer Guide Secondary indexes DynamoDB offers the option to create both global and local secondary indexes, which let you query the table data using an alternate key. With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. Service integrations DynamoDB broadly integrates with several AWS services to"} +{"global_id": 1455, "doc_id": "dynamodb", "chunk_id": "4", "question_id": 4, "question": "What does DynamoDB support for event-driven architectures?", "answer_span": "DynamoDB supports streaming of item-level change data capture (CDC) records in near-real time.", "chunk": "global tables Global tables provide multi-active replication of your data across your chosen AWS Regions with 99.999% availability. Global tables deliver a fully managed solution for deploying a multi-Region, multi-active database, without building and maintaining your own replication solution. With global tables, you can specify the AWS Regions where you want the tables to be available. DynamoDB replicates ongoing data changes to all of these tables. Your globally distributed applications can access data locally in your selected Regions to achieve single-digit millisecond read and write performance. Because global tables are multi-active, you don't need a primary table. This means there are no complicated or delayed fail-overs, or database downtime when failing over an application between Regions. ACID transactions DynamoDB is built for mission-critical workloads. It includes (ACID) transactions support for applications that require complex business logic. DynamoDB provides native, server-side support for transactions, simplifying the developer experience of making coordinated, all-or-nothing changes to multiple items within and across tables. Change data capture for event-driven architectures DynamoDB supports streaming of item-level change data capture (CDC) records in near-real time. It offers two streaming models for CDC: DynamoDB Streams and Kinesis Data Streams for DynamoDB. Whenever an application creates, updates, or deletes items in a table, streams records a time-ordered sequence of every item-level change in near-real time. 
This makes DynamoDB Streams ideal for applications with event-driven architecture to consume and act upon the changes. Capabilities API Version 2012-08-10 4 Amazon DynamoDB Developer Guide Secondary indexes DynamoDB offers the option to create both global and local secondary indexes, which let you query the table data using an alternate key. With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. Service integrations DynamoDB broadly integrates with several AWS services to"} +{"global_id": 1456, "doc_id": "dynamodb", "chunk_id": "5", "question_id": 1, "question": "What do both global and local secondary indexes allow you to do?", "answer_span": "both global and local secondary indexes, which let you query the table data using an alternate key.", "chunk": "both global and local secondary indexes, which let you query the table data using an alternate key. With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. Service integrations DynamoDB broadly integrates with several AWS services to help you get more value from your data, eliminate undifferentiated heavy lifting, and operate your workloads at scale. Some examples are: AWS CloudFormation, Amazon CloudWatch, Amazon S3, AWS Identity and Access Management (IAM), and AWS Auto Scaling. The following sections describe some of the service integrations that you can perform using DynamoDB: Serverless integrations To build end-to-end serverless applications, DynamoDB integrates natively with a number of serverless AWS services. For example, you can integrate DynamoDB with AWS Lambda to create triggers, which are pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build event-driven applications that react to data modifications in DynamoDB tables. For cost optimization, you can filter events that Lambda processes from a DynamoDB stream. The following list presents some examples of serverless integrations with DynamoDB: • AWS AppSync for creating GraphQL APIs • Amazon API Gateway for creating REST APIs • Lambda for serverless compute • Amazon Kinesis Data Streams for change data capture (CDC) Importing and exporting data to Amazon S3 Integrating DynamoDB with Amazon S3 enables you to easily export data to an Amazon S3 bucket for analytics and machine learning. DynamoDB supports full table exports and incremental exports to export changed, updated, or deleted data between a specified time period. You can also import data from Amazon S3 into a new DynamoDB table. Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline"} +{"global_id": 1457, "doc_id": "dynamodb", "chunk_id": "5", "question_id": 2, "question": "What is one example of a service that integrates with DynamoDB?", "answer_span": "Some examples are: AWS CloudFormation, Amazon CloudWatch, Amazon S3, AWS Identity and Access Management (IAM), and AWS Auto Scaling.", "chunk": "both global and local secondary indexes, which let you query the table data using an alternate key. With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. 
Service integrations DynamoDB broadly integrates with several AWS services to help you get more value from your data, eliminate undifferentiated heavy lifting, and operate your workloads at scale. Some examples are: AWS CloudFormation, Amazon CloudWatch, Amazon S3, AWS Identity and Access Management (IAM), and AWS Auto Scaling. The following sections describe some of the service integrations that you can perform using DynamoDB: Serverless integrations To build end-to-end serverless applications, DynamoDB integrates natively with a number of serverless AWS services. For example, you can integrate DynamoDB with AWS Lambda to create triggers, which are pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build event-driven applications that react to data modifications in DynamoDB tables. For cost optimization, you can filter events that Lambda processes from a DynamoDB stream. The following list presents some examples of serverless integrations with DynamoDB: • AWS AppSync for creating GraphQL APIs • Amazon API Gateway for creating REST APIs • Lambda for serverless compute • Amazon Kinesis Data Streams for change data capture (CDC) Importing and exporting data to Amazon S3 Integrating DynamoDB with Amazon S3 enables you to easily export data to an Amazon S3 bucket for analytics and machine learning. DynamoDB supports full table exports and incremental exports to export changed, updated, or deleted data between a specified time period. You can also import data from Amazon S3 into a new DynamoDB table. Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline"} +{"global_id": 1458, "doc_id": "dynamodb", "chunk_id": "5", "question_id": 3, "question": "What can you integrate with AWS Lambda to create triggers?", "answer_span": "you can integrate DynamoDB with AWS Lambda to create triggers, which are pieces of code that automatically respond to events in DynamoDB Streams.", "chunk": "both global and local secondary indexes, which let you query the table data using an alternate key. With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. Service integrations DynamoDB broadly integrates with several AWS services to help you get more value from your data, eliminate undifferentiated heavy lifting, and operate your workloads at scale. Some examples are: AWS CloudFormation, Amazon CloudWatch, Amazon S3, AWS Identity and Access Management (IAM), and AWS Auto Scaling. The following sections describe some of the service integrations that you can perform using DynamoDB: Serverless integrations To build end-to-end serverless applications, DynamoDB integrates natively with a number of serverless AWS services. For example, you can integrate DynamoDB with AWS Lambda to create triggers, which are pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build event-driven applications that react to data modifications in DynamoDB tables. For cost optimization, you can filter events that Lambda processes from a DynamoDB stream. 
The following list presents some examples of serverless integrations with DynamoDB: • AWS AppSync for creating GraphQL APIs • Amazon API Gateway for creating REST APIs • Lambda for serverless compute • Amazon Kinesis Data Streams for change data capture (CDC) Importing and exporting data to Amazon S3 Integrating DynamoDB with Amazon S3 enables you to easily export data to an Amazon S3 bucket for analytics and machine learning. DynamoDB supports full table exports and incremental exports to export changed, updated, or deleted data between a specified time period. You can also import data from Amazon S3 into a new DynamoDB table. Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline"} +{"global_id": 1459, "doc_id": "dynamodb", "chunk_id": "5", "question_id": 4, "question": "What does integrating DynamoDB with Amazon S3 enable you to do?", "answer_span": "Integrating DynamoDB with Amazon S3 enables you to easily export data to an Amazon S3 bucket for analytics and machine learning.", "chunk": "both global and local secondary indexes, which let you query the table data using an alternate key. With these secondary indexes, you can access data with attributes other than the primary key, giving you maximum flexibility in accessing your data. Service integrations DynamoDB broadly integrates with several AWS services to help you get more value from your data, eliminate undifferentiated heavy lifting, and operate your workloads at scale. Some examples are: AWS CloudFormation, Amazon CloudWatch, Amazon S3, AWS Identity and Access Management (IAM), and AWS Auto Scaling. The following sections describe some of the service integrations that you can perform using DynamoDB: Serverless integrations To build end-to-end serverless applications, DynamoDB integrates natively with a number of serverless AWS services. For example, you can integrate DynamoDB with AWS Lambda to create triggers, which are pieces of code that automatically respond to events in DynamoDB Streams. With triggers, you can build event-driven applications that react to data modifications in DynamoDB tables. For cost optimization, you can filter events that Lambda processes from a DynamoDB stream. The following list presents some examples of serverless integrations with DynamoDB: • AWS AppSync for creating GraphQL APIs • Amazon API Gateway for creating REST APIs • Lambda for serverless compute • Amazon Kinesis Data Streams for change data capture (CDC) Importing and exporting data to Amazon S3 Integrating DynamoDB with Amazon S3 enables you to easily export data to an Amazon S3 bucket for analytics and machine learning. DynamoDB supports full table exports and incremental exports to export changed, updated, or deleted data between a specified time period. You can also import data from Amazon S3 into a new DynamoDB table. Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline"} +{"global_id": 1460, "doc_id": "dynamodb", "chunk_id": "6", "question_id": 1, "question": "What does DynamoDB support for integration with Amazon Redshift?", "answer_span": "DynamoDB supports zero-ETL integration with Amazon Redshift", "chunk": "to export changed, updated, or deleted data between a specified time period. 
You can also import data from Amazon S3 into a new DynamoDB table. Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline with Amazon DynamoDB. These integrations enable you to run complex analytics and use advanced search capabilities on your DynamoDB table data. For example, you can perform full-text and vector search, and semantic search on your DynamoDB data. Zero-ETL integrations have no impact on production workloads running on DynamoDB. Caching DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for DynamoDB. DAX delivers up to 10 times performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring you to manage cache invalidation, data population, or cluster management. Security DynamoDB utilizes IAM to help you securely control access to your DynamoDB resources. With IAM, you can centrally manage permissions that control which DynamoDB users can access resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. Because DynamoDB utilizes IAM, there are no user names or passwords for accessing DynamoDB. Because you don't have any complicated password rotation policies to manage, it simplifies your security posture. With IAM, you can also enable fine-grained access control to provide authorization at the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key"} +{"global_id": 1461, "doc_id": "dynamodb", "chunk_id": "6", "question_id": 2, "question": "What is the performance improvement delivered by DAX?", "answer_span": "DAX delivers up to 10 times performance improvement – from milliseconds to microseconds", "chunk": "to export changed, updated, or deleted data between a specified time period. You can also import data from Amazon S3 into a new DynamoDB table. Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline with Amazon DynamoDB. These integrations enable you to run complex analytics and use advanced search capabilities on your DynamoDB table data. For example, you can perform full-text and vector search, and semantic search on your DynamoDB data. Zero-ETL integrations have no impact on production workloads running on DynamoDB. Caching DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for DynamoDB. DAX delivers up to 10 times performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring you to manage cache invalidation, data population, or cluster management. Security DynamoDB utilizes IAM to help you securely control access to your DynamoDB resources. With IAM, you can centrally manage permissions that control which DynamoDB users can access resources. 
You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. Because DynamoDB utilizes IAM, there are no user names or passwords for accessing DynamoDB. Because you don't have any complicated password rotation policies to manage, it simplifies your security posture. With IAM, you can also enable fine-grained access control to provide authorization at the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key"} +{"global_id": 1462, "doc_id": "dynamodb", "chunk_id": "6", "question_id": 3, "question": "How does DynamoDB utilize IAM for security?", "answer_span": "DynamoDB utilizes IAM to help you securely control access to your DynamoDB resources", "chunk": "to export changed, updated, or deleted data between a specified time period. You can also import data from Amazon S3 into a new DynamoDB table. Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline with Amazon DynamoDB. These integrations enable you to run complex analytics and use advanced search capabilities on your DynamoDB table data. For example, you can perform full-text and vector search, and semantic search on your DynamoDB data. Zero-ETL integrations have no impact on production workloads running on DynamoDB. Caching DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for DynamoDB. DAX delivers up to 10 times performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring you to manage cache invalidation, data population, or cluster management. Security DynamoDB utilizes IAM to help you securely control access to your DynamoDB resources. With IAM, you can centrally manage permissions that control which DynamoDB users can access resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. Because DynamoDB utilizes IAM, there are no user names or passwords for accessing DynamoDB. Because you don't have any complicated password rotation policies to manage, it simplifies your security posture. With IAM, you can also enable fine-grained access control to provide authorization at the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key"} +{"global_id": 1463, "doc_id": "dynamodb", "chunk_id": "6", "question_id": 4, "question": "What does DynamoDB do by default to enhance data security?", "answer_span": "By default, DynamoDB encrypts all customer data at rest", "chunk": "to export changed, updated, or deleted data between a specified time period. You can also import data from Amazon S3 into a new DynamoDB table. 
Secondary indexes API Version 2012-08-10 5 Amazon DynamoDB Developer Guide Zero-ETL integration DynamoDB supports zero-ETL integration with Amazon Redshift and Using an OpenSearch Ingestion pipeline with Amazon DynamoDB. These integrations enable you to run complex analytics and use advanced search capabilities on your DynamoDB table data. For example, you can perform full-text and vector search, and semantic search on your DynamoDB data. Zero-ETL integrations have no impact on production workloads running on DynamoDB. Caching DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for DynamoDB. DAX delivers up to 10 times performance improvement – from milliseconds to microseconds – even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to your DynamoDB tables, without requiring you to manage cache invalidation, data population, or cluster management. Security DynamoDB utilizes IAM to help you securely control access to your DynamoDB resources. With IAM, you can centrally manage permissions that control which DynamoDB users can access resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. Because DynamoDB utilizes IAM, there are no user names or passwords for accessing DynamoDB. Because you don't have any complicated password rotation policies to manage, it simplifies your security posture. With IAM, you can also enable fine-grained access control to provide authorization at the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key"} +{"global_id": 1464, "doc_id": "dynamodb", "chunk_id": "7", "question_id": 1, "question": "What does DynamoDB do by default with customer data?", "answer_span": "By default, DynamoDB encrypts all customer data at rest.", "chunk": "the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key Management Service (AWS KMS). With encryption at rest, you can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. When you access an encrypted table, DynamoDB decrypts the table data transparently. You don't have to change any code or applications to use or manage encrypted tables. DynamoDB continues to deliver the same single-digit millisecond latency that you've come to expect, and all DynamoDB queries work seamlessly on your encrypted data. You can specify whether DynamoDB should use an AWS owned key (default encryption type), AWS managed key, or a Customer managed key to encrypt user data. The default encryption using AWSZero-ETL integration API Version 2012-08-10 6 Amazon DynamoDB Developer Guide Best practices for designing and architecting with DynamoDB Use this section to quickly find recommendations for maximizing performance and minimizing throughput costs when working with DynamoDB. 
Topics • NoSQL design for DynamoDB • Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload • Best practices for designing and using partition keys effectively in DynamoDB • Best practices for using sort keys to organize data in DynamoDB • Best practices for using secondary indexes in DynamoDB • Best practices for storing large items and attributes in DynamoDB • Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the"} +{"global_id": 1465, "doc_id": "dynamodb", "chunk_id": "7", "question_id": 2, "question": "What enhances the security of your data in DynamoDB?", "answer_span": "Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key Management Service (AWS KMS).", "chunk": "the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key Management Service (AWS KMS). With encryption at rest, you can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. When you access an encrypted table, DynamoDB decrypts the table data transparently. You don't have to change any code or applications to use or manage encrypted tables. DynamoDB continues to deliver the same single-digit millisecond latency that you've come to expect, and all DynamoDB queries work seamlessly on your encrypted data. You can specify whether DynamoDB should use an AWS owned key (default encryption type), AWS managed key, or a Customer managed key to encrypt user data. The default encryption using AWSZero-ETL integration API Version 2012-08-10 6 Amazon DynamoDB Developer Guide Best practices for designing and architecting with DynamoDB Use this section to quickly find recommendations for maximizing performance and minimizing throughput costs when working with DynamoDB. Topics • NoSQL design for DynamoDB • Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload • Best practices for designing and using partition keys effectively in DynamoDB • Best practices for using sort keys to organize data in DynamoDB • Best practices for using secondary indexes in DynamoDB • Best practices for storing large items and attributes in DynamoDB • Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the"} +{"global_id": 1466, "doc_id": "dynamodb", "chunk_id": "7", "question_id": 3, "question": "What is the default encryption type used by DynamoDB?", "answer_span": "You can specify whether DynamoDB should use an AWS owned key (default encryption type), AWS managed key, or a Customer managed key to encrypt user data.", "chunk": "the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. 
Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key Management Service (AWS KMS). With encryption at rest, you can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. When you access an encrypted table, DynamoDB decrypts the table data transparently. You don't have to change any code or applications to use or manage encrypted tables. DynamoDB continues to deliver the same single-digit millisecond latency that you've come to expect, and all DynamoDB queries work seamlessly on your encrypted data. You can specify whether DynamoDB should use an AWS owned key (default encryption type), AWS managed key, or a Customer managed key to encrypt user data. The default encryption using AWSZero-ETL integration API Version 2012-08-10 6 Amazon DynamoDB Developer Guide Best practices for designing and architecting with DynamoDB Use this section to quickly find recommendations for maximizing performance and minimizing throughput costs when working with DynamoDB. Topics • NoSQL design for DynamoDB • Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload • Best practices for designing and using partition keys effectively in DynamoDB • Best practices for using sort keys to organize data in DynamoDB • Best practices for using secondary indexes in DynamoDB • Best practices for storing large items and attributes in DynamoDB • Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the"} +{"global_id": 1467, "doc_id": "dynamodb", "chunk_id": "7", "question_id": 4, "question": "What does the section on best practices for designing and architecting with DynamoDB provide?", "answer_span": "Use this section to quickly find recommendations for maximizing performance and minimizing throughput costs when working with DynamoDB.", "chunk": "the attribute level. You can also define resource-based policies with support for IAM Access Analyzer and Block Public Access (BPA) to simplify policy management. By default, DynamoDB encrypts all customer data at rest. Encryption at rest enhances the security of your data by using encryption keys stored in AWS Key Management Service (AWS KMS). With encryption at rest, you can build security-sensitive applications that meet strict encryption compliance and regulatory requirements. When you access an encrypted table, DynamoDB decrypts the table data transparently. You don't have to change any code or applications to use or manage encrypted tables. DynamoDB continues to deliver the same single-digit millisecond latency that you've come to expect, and all DynamoDB queries work seamlessly on your encrypted data. You can specify whether DynamoDB should use an AWS owned key (default encryption type), AWS managed key, or a Customer managed key to encrypt user data. The default encryption using AWSZero-ETL integration API Version 2012-08-10 6 Amazon DynamoDB Developer Guide Best practices for designing and architecting with DynamoDB Use this section to quickly find recommendations for maximizing performance and minimizing throughput costs when working with DynamoDB. 
Topics • NoSQL design for DynamoDB • Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload • Best practices for designing and using partition keys effectively in DynamoDB • Best practices for using sort keys to organize data in DynamoDB • Best practices for using secondary indexes in DynamoDB • Best practices for storing large items and attributes in DynamoDB • Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the"} +{"global_id": 1468, "doc_id": "dynamodb", "chunk_id": "8", "question_id": 1, "question": "What are the best practices for handling time series data in DynamoDB?", "answer_span": "Best practices for handling time series data in DynamoDB", "chunk": "Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the control plane in DynamoDB • Best practices for using bulk data operations in DynamoDB • Best practices for implementing version control in DynamoDB • Best practices for understanding your AWS billing and usage reports in DynamoDB • Migrating a DynamoDB table from one account to another • Prescriptive guidance to integrate DAX with DynamoDB applications • Considerations when using AWS PrivateLink for Amazon DynamoDB NoSQL design for DynamoDB NoSQL database systems like Amazon DynamoDB use alternative models for data management, such as key-value pairs or document storage. When you switch from a relational database NoSQL design API Version 2012-08-10 3045 Amazon DynamoDB Developer Guide management system to a NoSQL database system like DynamoDB, it's important to understand the key differences and specific design approaches. Topics • Differences between relational data design and NoSQL • Two key concepts for NoSQL design • Approaching NoSQL design • NoSQL Workbench for DynamoDB Differences between relational data design and NoSQL Relational database systems (RDBMS) and NoSQL databases have different strengths and weaknesses: • In RDBMS, data can be queried flexibly, but queries are relatively expensive and don't scale well in high-traffic situations (see First steps for modeling relational data in DynamoDB). • In a NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. 
Query"} +{"global_id": 1469, "doc_id": "dynamodb", "chunk_id": "8", "question_id": 2, "question": "What is a key difference between RDBMS and NoSQL databases?", "answer_span": "Relational database systems (RDBMS) and NoSQL databases have different strengths and weaknesses", "chunk": "Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the control plane in DynamoDB • Best practices for using bulk data operations in DynamoDB • Best practices for implementing version control in DynamoDB • Best practices for understanding your AWS billing and usage reports in DynamoDB • Migrating a DynamoDB table from one account to another • Prescriptive guidance to integrate DAX with DynamoDB applications • Considerations when using AWS PrivateLink for Amazon DynamoDB NoSQL design for DynamoDB NoSQL database systems like Amazon DynamoDB use alternative models for data management, such as key-value pairs or document storage. When you switch from a relational database NoSQL design API Version 2012-08-10 3045 Amazon DynamoDB Developer Guide management system to a NoSQL database system like DynamoDB, it's important to understand the key differences and specific design approaches. Topics • Differences between relational data design and NoSQL • Two key concepts for NoSQL design • Approaching NoSQL design • NoSQL Workbench for DynamoDB Differences between relational data design and NoSQL Relational database systems (RDBMS) and NoSQL databases have different strengths and weaknesses: • In RDBMS, data can be queried flexibly, but queries are relatively expensive and don't scale well in high-traffic situations (see First steps for modeling relational data in DynamoDB). • In a NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. Query"} +{"global_id": 1470, "doc_id": "dynamodb", "chunk_id": "8", "question_id": 3, "question": "What can be said about querying data in RDBMS?", "answer_span": "In RDBMS, data can be queried flexibly, but queries are relatively expensive and don't scale well in high-traffic situations", "chunk": "Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the control plane in DynamoDB • Best practices for using bulk data operations in DynamoDB • Best practices for implementing version control in DynamoDB • Best practices for understanding your AWS billing and usage reports in DynamoDB • Migrating a DynamoDB table from one account to another • Prescriptive guidance to integrate DAX with DynamoDB applications • Considerations when using AWS PrivateLink for Amazon DynamoDB NoSQL design for DynamoDB NoSQL database systems like Amazon DynamoDB use alternative models for data management, such as key-value pairs or document storage. 
When you switch from a relational database NoSQL design API Version 2012-08-10 3045 Amazon DynamoDB Developer Guide management system to a NoSQL database system like DynamoDB, it's important to understand the key differences and specific design approaches. Topics • Differences between relational data design and NoSQL • Two key concepts for NoSQL design • Approaching NoSQL design • NoSQL Workbench for DynamoDB Differences between relational data design and NoSQL Relational database systems (RDBMS) and NoSQL databases have different strengths and weaknesses: • In RDBMS, data can be queried flexibly, but queries are relatively expensive and don't scale well in high-traffic situations (see First steps for modeling relational data in DynamoDB). • In a NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. Query"} +{"global_id": 1471, "doc_id": "dynamodb", "chunk_id": "8", "question_id": 4, "question": "What is a characteristic of querying data in a NoSQL database like DynamoDB?", "answer_span": "In a NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways", "chunk": "Best practices for handling time series data in DynamoDB • Best practices for managing many-to-many relationships in DynamoDB tables • Best practices for querying and scanning data in DynamoDB • Best practices for DynamoDB table design • Best practices for DynamoDB global table design • Best practices for managing the control plane in DynamoDB • Best practices for using bulk data operations in DynamoDB • Best practices for implementing version control in DynamoDB • Best practices for understanding your AWS billing and usage reports in DynamoDB • Migrating a DynamoDB table from one account to another • Prescriptive guidance to integrate DAX with DynamoDB applications • Considerations when using AWS PrivateLink for Amazon DynamoDB NoSQL design for DynamoDB NoSQL database systems like Amazon DynamoDB use alternative models for data management, such as key-value pairs or document storage. When you switch from a relational database NoSQL design API Version 2012-08-10 3045 Amazon DynamoDB Developer Guide management system to a NoSQL database system like DynamoDB, it's important to understand the key differences and specific design approaches. Topics • Differences between relational data design and NoSQL • Two key concepts for NoSQL design • Approaching NoSQL design • NoSQL Workbench for DynamoDB Differences between relational data design and NoSQL Relational database systems (RDBMS) and NoSQL databases have different strengths and weaknesses: • In RDBMS, data can be queried flexibly, but queries are relatively expensive and don't scale well in high-traffic situations (see First steps for modeling relational data in DynamoDB). • In a NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. 
Query"} +{"global_id": 1472, "doc_id": "dynamodb", "chunk_id": "9", "question_id": 1, "question": "What is a key consideration when designing a schema for DynamoDB?", "answer_span": "you shouldn't start designing your schema for DynamoDB until you know the questions it will need to answer.", "chunk": "NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. Query optimization generally doesn't affect schema design, but normalization is important. • In DynamoDB, you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of your business use cases. Two key concepts for NoSQL design NoSQL design requires a different mindset than RDBMS design. For an RDBMS, you can go ahead and create a normalized data model without thinking about access patterns. You can then extend it later when new questions and query requirements arise. You can organize each type of data into its own table. NoSQL vs. RDBMS API Version 2012-08-10 3046 Amazon DynamoDB Developer Guide How NoSQL design is different • By contrast, you shouldn't start designing your schema for DynamoDB until you know the questions it will need to answer. Understanding the business problems and the application use cases up front is essential. • You should maintain as few tables as possible in a DynamoDB application. Having fewer tables keeps things more scalable, requires less permissions management, and reduces overhead for your DynamoDB application. It can also help keep backup costs lower overall. Approaching NoSQL design The first step in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time"} +{"global_id": 1473, "doc_id": "dynamodb", "chunk_id": "9", "question_id": 2, "question": "What is important to understand before beginning NoSQL design?", "answer_span": "it is important to understand three fundamental properties of your application's access patterns before you begin.", "chunk": "NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. Query optimization generally doesn't affect schema design, but normalization is important. • In DynamoDB, you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of your business use cases. Two key concepts for NoSQL design NoSQL design requires a different mindset than RDBMS design. For an RDBMS, you can go ahead and create a normalized data model without thinking about access patterns. You can then extend it later when new questions and query requirements arise. You can organize each type of data into its own table. NoSQL vs. 
RDBMS API Version 2012-08-10 3046 Amazon DynamoDB Developer Guide How NoSQL design is different • By contrast, you shouldn't start designing your schema for DynamoDB until you know the questions it will need to answer. Understanding the business problems and the application use cases up front is essential. • You should maintain as few tables as possible in a DynamoDB application. Having fewer tables keeps things more scalable, requires less permissions management, and reduces overhead for your DynamoDB application. It can also help keep backup costs lower overall. Approaching NoSQL design The first step in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time"} +{"global_id": 1474, "doc_id": "dynamodb", "chunk_id": "9", "question_id": 3, "question": "What does having fewer tables in a DynamoDB application help with?", "answer_span": "Having fewer tables keeps things more scalable, requires less permissions management, and reduces overhead for your DynamoDB application.", "chunk": "NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. Query optimization generally doesn't affect schema design, but normalization is important. • In DynamoDB, you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of your business use cases. Two key concepts for NoSQL design NoSQL design requires a different mindset than RDBMS design. For an RDBMS, you can go ahead and create a normalized data model without thinking about access patterns. You can then extend it later when new questions and query requirements arise. You can organize each type of data into its own table. NoSQL vs. RDBMS API Version 2012-08-10 3046 Amazon DynamoDB Developer Guide How NoSQL design is different • By contrast, you shouldn't start designing your schema for DynamoDB until you know the questions it will need to answer. Understanding the business problems and the application use cases up front is essential. • You should maintain as few tables as possible in a DynamoDB application. Having fewer tables keeps things more scalable, requires less permissions management, and reduces overhead for your DynamoDB application. It can also help keep backup costs lower overall. Approaching NoSQL design The first step in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. 
In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time"} +{"global_id": 1475, "doc_id": "dynamodb", "chunk_id": "9", "question_id": 4, "question": "What is a difference between RDBMS and NoSQL design regarding schema?", "answer_span": "In RDBMS, you design for flexibility without worrying about implementation details or performance.", "chunk": "NoSQL database such as DynamoDB, data can be queried efficiently in a limited number of ways, outside of which queries can be expensive and slow. These differences make database design different between the two systems: • In RDBMS, you design for flexibility without worrying about implementation details or performance. Query optimization generally doesn't affect schema design, but normalization is important. • In DynamoDB, you design your schema specifically to make the most common and important queries as fast and as inexpensive as possible. Your data structures are tailored to the specific requirements of your business use cases. Two key concepts for NoSQL design NoSQL design requires a different mindset than RDBMS design. For an RDBMS, you can go ahead and create a normalized data model without thinking about access patterns. You can then extend it later when new questions and query requirements arise. You can organize each type of data into its own table. NoSQL vs. RDBMS API Version 2012-08-10 3046 Amazon DynamoDB Developer Guide How NoSQL design is different • By contrast, you shouldn't start designing your schema for DynamoDB until you know the questions it will need to answer. Understanding the business problems and the application use cases up front is essential. • You should maintain as few tables as possible in a DynamoDB application. Having fewer tables keeps things more scalable, requires less permissions management, and reduces overhead for your DynamoDB application. It can also help keep backup costs lower overall. Approaching NoSQL design The first step in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time"} +{"global_id": 1476, "doc_id": "dynamodb", "chunk_id": "10", "question_id": 1, "question": "What is important to understand before beginning to design a DynamoDB application?", "answer_span": "it is important to understand three fundamental properties of your application's access patterns before you begin", "chunk": "in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time will help determine the most effective way to partition the data. • Data shape: Instead of reshaping data when a query is processed (as an RDBMS system does), a NoSQL database organizes data so that its shape in the database corresponds with what will be queried. This is a key factor in increasing speed and scalability. • Data velocity: DynamoDB scales by increasing the number of physical partitions that are available to process queries, and by efficiently distributing data across those partitions. 
Knowing in advance what the peak query loads will be might help determine how to partition data to best use I/O capacity. After you identify specific query requirements, you can organize data according to general principles that govern performance: • Keep related data together. Research has shown that the principle of 'locality of reference', keeping related data together in one place, is a key factor in improving performance and response times in NoSQL systems, just as it was found to be important for optimizing routing tables many years ago. As a general rule, you should maintain as few tables as possible in a DynamoDB application. General approach API Version 2012-08-10 3047 Amazon DynamoDB Developer Guide Exceptions are cases where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. Related items can be grouped together and"} +{"global_id": 1477, "doc_id": "dynamodb", "chunk_id": "10", "question_id": 2, "question": "What does knowing the data size help determine?", "answer_span": "Knowing how much data will be stored and requested at one time will help determine the most effective way to partition the data", "chunk": "in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time will help determine the most effective way to partition the data. • Data shape: Instead of reshaping data when a query is processed (as an RDBMS system does), a NoSQL database organizes data so that its shape in the database corresponds with what will be queried. This is a key factor in increasing speed and scalability. • Data velocity: DynamoDB scales by increasing the number of physical partitions that are available to process queries, and by efficiently distributing data across those partitions. Knowing in advance what the peak query loads will be might help determine how to partition data to best use I/O capacity. After you identify specific query requirements, you can organize data according to general principles that govern performance: • Keep related data together. Research has shown that the principle of 'locality of reference', keeping related data together in one place, is a key factor in improving performance and response times in NoSQL systems, just as it was found to be important for optimizing routing tables many years ago. As a general rule, you should maintain as few tables as possible in a DynamoDB application. General approach API Version 2012-08-10 3047 Amazon DynamoDB Developer Guide Exceptions are cases where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. 
Related items can be grouped together and"} +{"global_id": 1478, "doc_id": "dynamodb", "chunk_id": "10", "question_id": 3, "question": "What is a key factor in increasing speed and scalability in a NoSQL database?", "answer_span": "a NoSQL database organizes data so that its shape in the database corresponds with what will be queried", "chunk": "in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time will help determine the most effective way to partition the data. • Data shape: Instead of reshaping data when a query is processed (as an RDBMS system does), a NoSQL database organizes data so that its shape in the database corresponds with what will be queried. This is a key factor in increasing speed and scalability. • Data velocity: DynamoDB scales by increasing the number of physical partitions that are available to process queries, and by efficiently distributing data across those partitions. Knowing in advance what the peak query loads will be might help determine how to partition data to best use I/O capacity. After you identify specific query requirements, you can organize data according to general principles that govern performance: • Keep related data together. Research has shown that the principle of 'locality of reference', keeping related data together in one place, is a key factor in improving performance and response times in NoSQL systems, just as it was found to be important for optimizing routing tables many years ago. As a general rule, you should maintain as few tables as possible in a DynamoDB application. General approach API Version 2012-08-10 3047 Amazon DynamoDB Developer Guide Exceptions are cases where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. Related items can be grouped together and"} +{"global_id": 1479, "doc_id": "dynamodb", "chunk_id": "10", "question_id": 4, "question": "What principle should be followed to improve performance and response times in NoSQL systems?", "answer_span": "the principle of 'locality of reference', keeping related data together in one place, is a key factor in improving performance and response times in NoSQL systems", "chunk": "in designing your DynamoDB application is to identify the specific query patterns that the system must satisfy. In particular, it is important to understand three fundamental properties of your application's access patterns before you begin: • Data size: Knowing how much data will be stored and requested at one time will help determine the most effective way to partition the data. • Data shape: Instead of reshaping data when a query is processed (as an RDBMS system does), a NoSQL database organizes data so that its shape in the database corresponds with what will be queried. This is a key factor in increasing speed and scalability. • Data velocity: DynamoDB scales by increasing the number of physical partitions that are available to process queries, and by efficiently distributing data across those partitions. 
Knowing in advance what the peak query loads will be might help determine how to partition data to best use I/O capacity. After you identify specific query requirements, you can organize data according to general principles that govern performance: • Keep related data together. Research has shown that the principle of 'locality of reference', keeping related data together in one place, is a key factor in improving performance and response times in NoSQL systems, just as it was found to be important for optimizing routing tables many years ago. As a general rule, you should maintain as few tables as possible in a DynamoDB application. General approach API Version 2012-08-10 3047 Amazon DynamoDB Developer Guide Exceptions are cases where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. Related items can be grouped together and"} +{"global_id": 1480, "doc_id": "dynamodb", "chunk_id": "11", "question_id": 1, "question": "What can a single table with inverted indexes enable?", "answer_span": "A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application.", "chunk": "where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. Related items can be grouped together and queried efficiently if their key design causes them to sort together. This is an important NoSQL design strategy. • Distribute queries. It's also important that a high volume of queries not be focused on one part of the database, where they can exceed I/O capacity. Instead, you should design data keys to distribute traffic evenly across partitions as much as possible, avoiding hot spots. • Use global secondary indexes. By creating specific global secondary indexes, you can enable different queries than your main table can support, and that are still fast and relatively inexpensive. These general principles translate into some common design patterns that you can use to model data efficiently in DynamoDB. NoSQL Workbench for DynamoDB NoSQL Workbench for DynamoDB is a cross-platform, client-side GUI application that you can use for modern database development and operations. It's available for Windows, macOS, and Linux. NoSQL Workbench is a visual development tool that provides data modeling, data visualization, sample data generation, and query development features to help you design, create, query, and manage DynamoDB tables. With NoSQL Workbench for DynamoDB, you can build new data models from, or design models based on, existing data models that satisfy your application's data access patterns. You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. 
Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles"} +{"global_id": 1481, "doc_id": "dynamodb", "chunk_id": "11", "question_id": 2, "question": "What is an important NoSQL design strategy related to item sorting?", "answer_span": "This is an important NoSQL design strategy.", "chunk": "where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. Related items can be grouped together and queried efficiently if their key design causes them to sort together. This is an important NoSQL design strategy. • Distribute queries. It's also important that a high volume of queries not be focused on one part of the database, where they can exceed I/O capacity. Instead, you should design data keys to distribute traffic evenly across partitions as much as possible, avoiding hot spots. • Use global secondary indexes. By creating specific global secondary indexes, you can enable different queries than your main table can support, and that are still fast and relatively inexpensive. These general principles translate into some common design patterns that you can use to model data efficiently in DynamoDB. NoSQL Workbench for DynamoDB NoSQL Workbench for DynamoDB is a cross-platform, client-side GUI application that you can use for modern database development and operations. It's available for Windows, macOS, and Linux. NoSQL Workbench is a visual development tool that provides data modeling, data visualization, sample data generation, and query development features to help you design, create, query, and manage DynamoDB tables. With NoSQL Workbench for DynamoDB, you can build new data models from, or design models based on, existing data models that satisfy your application's data access patterns. You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles"} +{"global_id": 1482, "doc_id": "dynamodb", "chunk_id": "11", "question_id": 3, "question": "What should you design data keys to do in order to avoid hot spots?", "answer_span": "Instead, you should design data keys to distribute traffic evenly across partitions as much as possible, avoiding hot spots.", "chunk": "where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. Related items can be grouped together and queried efficiently if their key design causes them to sort together. This is an important NoSQL design strategy. • Distribute queries. It's also important that a high volume of queries not be focused on one part of the database, where they can exceed I/O capacity. Instead, you should design data keys to distribute traffic evenly across partitions as much as possible, avoiding hot spots. • Use global secondary indexes. 
By creating specific global secondary indexes, you can enable different queries than your main table can support, and that are still fast and relatively inexpensive. These general principles translate into some common design patterns that you can use to model data efficiently in DynamoDB. NoSQL Workbench for DynamoDB NoSQL Workbench for DynamoDB is a cross-platform, client-side GUI application that you can use for modern database development and operations. It's available for Windows, macOS, and Linux. NoSQL Workbench is a visual development tool that provides data modeling, data visualization, sample data generation, and query development features to help you design, create, query, and manage DynamoDB tables. With NoSQL Workbench for DynamoDB, you can build new data models from, or design models based on, existing data models that satisfy your application's data access patterns. You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles"} +{"global_id": 1483, "doc_id": "dynamodb", "chunk_id": "11", "question_id": 4, "question": "What features does NoSQL Workbench for DynamoDB provide?", "answer_span": "NoSQL Workbench is a visual development tool that provides data modeling, data visualization, sample data generation, and query development features to help you design, create, query, and manage DynamoDB tables.", "chunk": "where high-volume time series data are involved, or datasets that have very different access patterns. A single table with inverted indexes can usually enable simple queries to create and retrieve the complex hierarchical data structures required by your application. • Use sort order. Related items can be grouped together and queried efficiently if their key design causes them to sort together. This is an important NoSQL design strategy. • Distribute queries. It's also important that a high volume of queries not be focused on one part of the database, where they can exceed I/O capacity. Instead, you should design data keys to distribute traffic evenly across partitions as much as possible, avoiding hot spots. • Use global secondary indexes. By creating specific global secondary indexes, you can enable different queries than your main table can support, and that are still fast and relatively inexpensive. These general principles translate into some common design patterns that you can use to model data efficiently in DynamoDB. NoSQL Workbench for DynamoDB NoSQL Workbench for DynamoDB is a cross-platform, client-side GUI application that you can use for modern database development and operations. It's available for Windows, macOS, and Linux. NoSQL Workbench is a visual development tool that provides data modeling, data visualization, sample data generation, and query development features to help you design, create, query, and manage DynamoDB tables. With NoSQL Workbench for DynamoDB, you can build new data models from, or design models based on, existing data models that satisfy your application's data access patterns. You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. 
Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles"} +{"global_id": 1484, "doc_id": "dynamodb", "chunk_id": "12", "question_id": 1, "question": "What does the Amazon DynamoDB Well-Architected Lens provide?", "answer_span": "a collection of design principles and guidance for designing well-architected DynamoDB workloads.", "chunk": "patterns. You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles and guidance for designing well-architected DynamoDB workloads. Optimizing costs on DynamoDB tables This section covers best practices on how to optimize costs for your existing DynamoDB tables. You should look at the following strategies to see which cost optimization strategy best suits your NoSQL Workbench API Version 2012-08-10 3048 Amazon DynamoDB Developer Guide needs and approach them iteratively. Each strategy will provide an overview of what might be impacting your costs, what signs to look for, and prescriptive guidance on how to reduce them. Topics • Evaluate your costs at the table level • Evaluate your DynamoDB table's capacity mode • Evaluate your DynamoDB table's auto scaling settings • Evaluate your DynamoDB table class selection • Identify your unused resources in DynamoDB • Evaluate your DynamoDB table usage patterns • Evaluate your DynamoDB streams usage • Evaluate your provisioned capacity for right-sized provisioning in your DynamoDB table Evaluate your costs at the table level The Cost Explorer tool found within the AWS Management Console allows you to see costs broken down by type, such as read, write, storage and backup charges. You can also see these costs summarized by period such as month or day. One challenge administrators can face is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table."} +{"global_id": 1485, "doc_id": "dynamodb", "chunk_id": "12", "question_id": 2, "question": "What tool allows you to see costs broken down by type in DynamoDB?", "answer_span": "The Cost Explorer tool found within the AWS Management Console", "chunk": "patterns. You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles and guidance for designing well-architected DynamoDB workloads. Optimizing costs on DynamoDB tables This section covers best practices on how to optimize costs for your existing DynamoDB tables. You should look at the following strategies to see which cost optimization strategy best suits your NoSQL Workbench API Version 2012-08-10 3048 Amazon DynamoDB Developer Guide needs and approach them iteratively. Each strategy will provide an overview of what might be impacting your costs, what signs to look for, and prescriptive guidance on how to reduce them. 
Topics • Evaluate your costs at the table level • Evaluate your DynamoDB table's capacity mode • Evaluate your DynamoDB table's auto scaling settings • Evaluate your DynamoDB table class selection • Identify your unused resources in DynamoDB • Evaluate your DynamoDB table usage patterns • Evaluate your DynamoDB streams usage • Evaluate your provisioned capacity for right-sized provisioning in your DynamoDB table Evaluate your costs at the table level The Cost Explorer tool found within the AWS Management Console allows you to see costs broken down by type, such as read, write, storage and backup charges. You can also see these costs summarized by period such as month or day. One challenge administrators can face is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table."} +{"global_id": 1486, "doc_id": "dynamodb", "chunk_id": "12", "question_id": 3, "question": "What can you evaluate to optimize costs for your existing DynamoDB tables?", "answer_span": "best practices on how to optimize costs for your existing DynamoDB tables.", "chunk": "patterns. You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles and guidance for designing well-architected DynamoDB workloads. Optimizing costs on DynamoDB tables This section covers best practices on how to optimize costs for your existing DynamoDB tables. You should look at the following strategies to see which cost optimization strategy best suits your NoSQL Workbench API Version 2012-08-10 3048 Amazon DynamoDB Developer Guide needs and approach them iteratively. Each strategy will provide an overview of what might be impacting your costs, what signs to look for, and prescriptive guidance on how to reduce them. Topics • Evaluate your costs at the table level • Evaluate your DynamoDB table's capacity mode • Evaluate your DynamoDB table's auto scaling settings • Evaluate your DynamoDB table class selection • Identify your unused resources in DynamoDB • Evaluate your DynamoDB table usage patterns • Evaluate your DynamoDB streams usage • Evaluate your provisioned capacity for right-sized provisioning in your DynamoDB table Evaluate your costs at the table level The Cost Explorer tool found within the AWS Management Console allows you to see costs broken down by type, such as read, write, storage and backup charges. You can also see these costs summarized by period such as month or day. One challenge administrators can face is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table."} +{"global_id": 1487, "doc_id": "dynamodb", "chunk_id": "12", "question_id": 4, "question": "What challenge can administrators face regarding DynamoDB costs?", "answer_span": "One challenge administrators can face is when the costs of only one particular table need to be reviewed.", "chunk": "patterns. 
You can also import and export the designed data model at the end of the process. For more information, see Building data models with NoSQL Workbench. Using the DynamoDB Well-Architected Lens to optimize your DynamoDB workload This section describes the Amazon DynamoDB Well-Architected Lens, a collection of design principles and guidance for designing well-architected DynamoDB workloads. Optimizing costs on DynamoDB tables This section covers best practices on how to optimize costs for your existing DynamoDB tables. You should look at the following strategies to see which cost optimization strategy best suits your NoSQL Workbench API Version 2012-08-10 3048 Amazon DynamoDB Developer Guide needs and approach them iteratively. Each strategy will provide an overview of what might be impacting your costs, what signs to look for, and prescriptive guidance on how to reduce them. Topics • Evaluate your costs at the table level • Evaluate your DynamoDB table's capacity mode • Evaluate your DynamoDB table's auto scaling settings • Evaluate your DynamoDB table class selection • Identify your unused resources in DynamoDB • Evaluate your DynamoDB table usage patterns • Evaluate your DynamoDB streams usage • Evaluate your provisioned capacity for right-sized provisioning in your DynamoDB table Evaluate your costs at the table level The Cost Explorer tool found within the AWS Management Console allows you to see costs broken down by type, such as read, write, storage and backup charges. You can also see these costs summarized by period such as month or day. One challenge administrators can face is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table."} +{"global_id": 1488, "doc_id": "dynamodb", "chunk_id": "13", "question_id": 1, "question": "What does Cost Explorer's default view provide?", "answer_span": "Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage.", "chunk": "is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table. This section will show you how to use tagging to perform individual table cost analysis in Cost Explorer. Topics • How to view the costs of a single DynamoDB table • Cost Explorer's default view • How to use and apply table tags in Cost Explorer How to view the costs of a single DynamoDB table Both the Amazon DynamoDB AWS Management Console and the DescribeTable API will show you information about a single table, including the primary key schema, any indexes on the table, Cost optimization API Version 2012-08-10 3049 Amazon DynamoDB Developer Guide and the size and item count of the table and any indexes. The size of the table, plus the size of the indexes, can be used to calculate the monthly storage cost for your table. For example, $0.25 per GB in the us-east-1 region. If the table is in provisioned capacity mode, the current RCU and WCU settings are returned as well. These could be used to calculate the current read and write costs for the table, but these costs could change, especially if the table has been configured with Auto Scaling. 
Note If the table is in on-demand capacity mode, then DescribeTable will not help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. You can choose to group costs by period, such as totals"} +{"global_id": 1489, "doc_id": "dynamodb", "chunk_id": "13", "question_id": 2, "question": "How can you view the costs of a single DynamoDB table?", "answer_span": "Both the Amazon DynamoDB AWS Management Console and the DescribeTable API will show you information about a single table.", "chunk": "is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table. This section will show you how to use tagging to perform individual table cost analysis in Cost Explorer. Topics • How to view the costs of a single DynamoDB table • Cost Explorer's default view • How to use and apply table tags in Cost Explorer How to view the costs of a single DynamoDB table Both the Amazon DynamoDB AWS Management Console and the DescribeTable API will show you information about a single table, including the primary key schema, any indexes on the table, Cost optimization API Version 2012-08-10 3049 Amazon DynamoDB Developer Guide and the size and item count of the table and any indexes. The size of the table, plus the size of the indexes, can be used to calculate the monthly storage cost for your table. For example, $0.25 per GB in the us-east-1 region. If the table is in provisioned capacity mode, the current RCU and WCU settings are returned as well. These could be used to calculate the current read and write costs for the table, but these costs could change, especially if the table has been configured with Auto Scaling. Note If the table is in on-demand capacity mode, then DescribeTable will not help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. You can choose to group costs by period, such as totals"} +{"global_id": 1490, "doc_id": "dynamodb", "chunk_id": "13", "question_id": 3, "question": "What can be used to calculate the monthly storage cost for your table?", "answer_span": "The size of the table, plus the size of the indexes, can be used to calculate the monthly storage cost for your table.", "chunk": "is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table. This section will show you how to use tagging to perform individual table cost analysis in Cost Explorer. 
Topics • How to view the costs of a single DynamoDB table • Cost Explorer's default view • How to use and apply table tags in Cost Explorer How to view the costs of a single DynamoDB table Both the Amazon DynamoDB AWS Management Console and the DescribeTable API will show you information about a single table, including the primary key schema, any indexes on the table, Cost optimization API Version 2012-08-10 3049 Amazon DynamoDB Developer Guide and the size and item count of the table and any indexes. The size of the table, plus the size of the indexes, can be used to calculate the monthly storage cost for your table. For example, $0.25 per GB in the us-east-1 region. If the table is in provisioned capacity mode, the current RCU and WCU settings are returned as well. These could be used to calculate the current read and write costs for the table, but these costs could change, especially if the table has been configured with Auto Scaling. Note If the table is in on-demand capacity mode, then DescribeTable will not help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. You can choose to group costs by period, such as totals"} +{"global_id": 1491, "doc_id": "dynamodb", "chunk_id": "13", "question_id": 4, "question": "What happens if the table is in on-demand capacity mode?", "answer_span": "If the table is in on-demand capacity mode, then DescribeTable will not help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period.", "chunk": "is when the costs of only one particular table need to be reviewed. Some of this data is available via the DynamoDB console or via calls to the DescribeTable API, however Cost Explorer does not, by default, allow you to filter or group by costs associated with a specific table. This section will show you how to use tagging to perform individual table cost analysis in Cost Explorer. Topics • How to view the costs of a single DynamoDB table • Cost Explorer's default view • How to use and apply table tags in Cost Explorer How to view the costs of a single DynamoDB table Both the Amazon DynamoDB AWS Management Console and the DescribeTable API will show you information about a single table, including the primary key schema, any indexes on the table, Cost optimization API Version 2012-08-10 3049 Amazon DynamoDB Developer Guide and the size and item count of the table and any indexes. The size of the table, plus the size of the indexes, can be used to calculate the monthly storage cost for your table. For example, $0.25 per GB in the us-east-1 region. If the table is in provisioned capacity mode, the current RCU and WCU settings are returned as well. These could be used to calculate the current read and write costs for the table, but these costs could change, especially if the table has been configured with Auto Scaling. Note If the table is in on-demand capacity mode, then DescribeTable will not help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. 
You can choose to group costs by period, such as totals"} +{"global_id": 1492, "doc_id": "dynamodb", "chunk_id": "14", "question_id": 1, "question": "What does Cost Explorer's default view provide?", "answer_span": "Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage.", "chunk": "help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. You can choose to group costs by period, such as totals by month or by day. The costs of storage, reads, writes, and other features can be broken out and compared as well. How to use and apply table tags in Cost Explorer By default, Cost Explorer does not provide a summary of the costs for any one specific table, as it will combine the costs of multiple tables into a total. However, you can use AWS resource tagging to identify each table by a metadata tag. Tags are key-value pairs you can use for a variety of purposes, such as to identify all resources belonging to a project or department. For this example, we'll assume you have a table named MyTable. Cost optimization API Version 2012-08-10 3050"} +{"global_id": 1493, "doc_id": "dynamodb", "chunk_id": "14", "question_id": 2, "question": "How can costs be grouped in Cost Explorer?", "answer_span": "You can choose to group costs by period, such as totals by month or by day.", "chunk": "help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. You can choose to group costs by period, such as totals by month or by day. The costs of storage, reads, writes, and other features can be broken out and compared as well. How to use and apply table tags in Cost Explorer By default, Cost Explorer does not provide a summary of the costs for any one specific table, as it will combine the costs of multiple tables into a total. However, you can use AWS resource tagging to identify each table by a metadata tag. Tags are key-value pairs you can use for a variety of purposes, such as to identify all resources belonging to a project or department. For this example, we'll assume you have a table named MyTable. Cost optimization API Version 2012-08-10 3050"} +{"global_id": 1494, "doc_id": "dynamodb", "chunk_id": "14", "question_id": 3, "question": "What does Cost Explorer not provide by default for any one specific table?", "answer_span": "By default, Cost Explorer does not provide a summary of the costs for any one specific table, as it will combine the costs of multiple tables into a total.", "chunk": "help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. You can choose to group costs by period, such as totals by month or by day. The costs of storage, reads, writes, and other features can be broken out and compared as well. How to use and apply table tags in Cost Explorer By default, Cost Explorer does not provide a summary of the costs for any one specific table, as it will combine the costs of multiple tables into a total. 
However, you can use AWS resource tagging to identify each table by a metadata tag. Tags are key-value pairs you can use for a variety of purposes, such as to identify all resources belonging to a project or department. For this example, we'll assume you have a table named MyTable. Cost optimization API Version 2012-08-10 3050"} +{"global_id": 1495, "doc_id": "dynamodb", "chunk_id": "14", "question_id": 4, "question": "What are tags in AWS resource tagging?", "answer_span": "Tags are key-value pairs you can use for a variety of purposes, such as to identify all resources belonging to a project or department.", "chunk": "help estimate throughput costs, as these are billed based on actual, not provisioned usage in any one period. Cost Explorer's default view Cost Explorer's default view provides charts showing the cost of consumed resources such as throughput and storage. You can choose to group costs by period, such as totals by month or by day. The costs of storage, reads, writes, and other features can be broken out and compared as well. How to use and apply table tags in Cost Explorer By default, Cost Explorer does not provide a summary of the costs for any one specific table, as it will combine the costs of multiple tables into a total. However, you can use AWS resource tagging to identify each table by a metadata tag. Tags are key-value pairs you can use for a variety of purposes, such as to identify all resources belonging to a project or department. For this example, we'll assume you have a table named MyTable. Cost optimization API Version 2012-08-10 3050"}
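The cost-analysis guidance captured in the records above (estimating storage cost from DescribeTable output, and using resource tags so Cost Explorer can break out a single table's charges) can be illustrated with a short Python (boto3) sketch. This is a sketch under stated assumptions: the table name MyTable and the $0.25 per GB us-east-1 storage rate come from the records themselves, while the tag key dynamodb-table, the date range, and the USAGE_TYPE grouping are illustrative choices that do not appear in the source.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
ce = boto3.client("ce")  # Cost Explorer

# 1. DescribeTable returns the table size and ARN; the table size (plus any
#    index sizes) gives a rough monthly storage estimate at the per-GB rate
#    quoted in the records above for us-east-1.
table = dynamodb.describe_table(TableName="MyTable")["Table"]
size_gb = table["TableSizeBytes"] / (1024 ** 3)
print(f"Approximate storage cost: ${size_gb * 0.25:.2f}/month")

# 2. Tag the table so its charges can be isolated. The tag key is a
#    hypothetical choice; the tag must also be activated as a cost allocation
#    tag in the Billing console before Cost Explorer can filter on it.
dynamodb.tag_resource(
    ResourceArn=table["TableArn"],
    Tags=[{"Key": "dynamodb-table", "Value": "MyTable"}],
)

# 3. Once cost data has accrued, filter Cost Explorer by that tag to see only
#    this table's read, write, storage, and backup charges.
costs = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # illustrative range
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
    Filter={"Tags": {"Key": "dynamodb-table", "Values": ["MyTable"]}},
)
for group in costs["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])

The tag filter is what makes a per-table view possible; without it, Cost Explorer aggregates all DynamoDB usage in the account into a single total, as the records above note.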