Free Practice · No Signup Required
30 Free AWS SOA-C03 Practice Questions
Real practice questions for the AWS CloudOps Engineer Associate (SOA-C03) exam, with answers and detailed explanations. Updated 2026.
Free questions
30
Passing score
720 out of 1000
Exam time
130 minutes
Question pool
390+ Questions
Below are 30 real practice questions for the AWS CloudOps Engineer Associate (SOA-C03) exam. Each question shows the correct answer and a detailed explanation when you reveal it. Use these to benchmark your readiness — if you score below 70% on these 30 questions, plan for at least 4 more weeks of study before booking.
SOA-C03 Practice Questions
Question 1.An Amazon EC2 instance needs to be reachable from the internet. The EC2 instance is in a subnet with the following route table. Which entry must a CloudOps Engineer add to the route table to meet this requirement? 
- A.A route for `0.0.0.0/0` that points to a `NAT` gateway.
- B.A route for `0.0.0.0/0` that points to an egress-only internet gateway.
- C.A route for `0.0.0.0/0` that points to an internet gateway.(correct answer)
- D.A route for `0.0.0.0/0` that points to an elastic network interface.
Correct answer: C
A route for `0.0.0.0/0` that points to an internet gateway.
Explanation
To be reachable from the internet, a subnet must have a route to an Internet Gateway (IGW) for 0.0.0.0/0.
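As a sketch, the added route would look like this. The parameter names match what boto3's `create_route` expects; the gateway ID is a hypothetical placeholder, not a value from the question.

```python
# Sketch of the route entry that makes a subnet public. The gateway ID is a
# hypothetical placeholder.
public_route = {
    "DestinationCidrBlock": "0.0.0.0/0",   # all internet-bound traffic
    "GatewayId": "igw-0123456789abcdef0",  # internet gateway (placeholder ID)
}

# An egress-only internet gateway (option B) is IPv6-only, and a NAT gateway
# (option A) permits only outbound-initiated traffic, so neither makes the
# instance reachable from the internet.
```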
Question 2.A CloudOps Engineer launches an Amazon EC2 instance in a private subnet of a `VPC`. When the CloudOps Engineer attempts a `curl` command from the command line of the EC2 instance, the CloudOps Engineer cannot connect to `https://www.example.com`. What should the CloudOps Engineer do to resolve this issue?
- A.Ensure that there is an outbound security group for port `443` to `0.0.0.0/0`.(correct answer)
- B.Ensure that there is an inbound security group for port `443` from `0.0.0.0/0`.
- C.Ensure that there is an outbound network `ACL` for ephemeral ports `1024-65535` to `0.0.0.0/0`.
- D.Ensure that there is an outbound network `ACL` for port `80` to `0.0.0.0/0`.
Correct answer: A
Ensure that there is an outbound security group for port `443` to `0.0.0.0/0`.
Explanation
Security groups are stateful, but outbound rules must explicitly allow traffic. For HTTPS, outbound port 443 needs to be allowed.
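A sketch of the missing outbound rule, in the shape boto3's `authorize_security_group_egress` takes (the description text is our own addition):

```python
# Hypothetical outbound security-group rule allowing HTTPS to anywhere.
https_egress = {
    "IpProtocol": "tcp",
    "FromPort": 443,
    "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "allow outbound HTTPS"}],
}
```

Because security groups are stateful, the HTTPS response traffic is allowed back in automatically; no inbound rule is needed for the `curl` to succeed.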
Question 3.A company's public website is hosted in an Amazon S3 bucket in the `us-east-1` Region behind an Amazon CloudFront distribution. The company wants to ensure that the website is protected from DDoS attacks. A CloudOps Engineer needs to deploy a solution that gives the company the ability to maintain control over the rate limit at which DDoS protections are applied. Which solution will meet these requirements?
- A.Deploy a global-scoped AWS WAF web `ACL` with an allow default action. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web `ACL` with the CloudFront distribution.(correct answer)
- B.Deploy an AWS WAF web `ACL` with an allow default action in `us-east-1`. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web `ACL` with the S3 bucket.
- C.Deploy a global-scoped AWS WAF web `ACL` with a block default action. Configure an AWS WAF rate-based rule to allow matching traffic. Associate the web `ACL` with the CloudFront distribution.
- D.Deploy an AWS WAF web `ACL` with a block default action in `us-east-1`. Configure an AWS WAF rate-based rule to allow matching traffic. Associate the web `ACL` with the S3 bucket.
Correct answer: A
Deploy a global-scoped AWS WAF web `ACL` with an allow default action. Configure an AWS WAF rate-based rule to block matching traffic. Associate the web `ACL` with the CloudFront distribution.
Explanation
CloudFront distributions can only be associated with a global-scoped WAF web ACL. A rate-based rule with a Block action on traffic that exceeds the configured rate gives the company direct control over the rate limit at which DDoS protections are applied, which is exactly the requirement.
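A minimal sketch of such a rate-based rule, in the JSON shape the WAFv2 `CreateWebACL` API uses. The rule name and the 2,000-request limit are illustrative choices, not values from the question.

```python
# Sketch of a WAF rate-based rule that blocks IPs exceeding the rate limit.
rate_rule = {
    "Name": "rate-limit-per-ip",
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,           # requests per 5-minute window, per IP
            "AggregateKeyType": "IP",
        }
    },
    "Action": {"Block": {}},         # block traffic that exceeds the limit
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit-per-ip",
    },
}
```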
Question 4.A company hosts an online shopping portal in the AWS Cloud. The portal provides `HTTPS` security by using a TLS certificate on an Elastic Load Balancer (ELB). Recently, the portal suffered an outage because the TLS certificate expired. A CloudOps Engineer must create a solution to automatically renew certificates to avoid this issue in the future. What is the MOST operationally efficient solution that meets these requirements?
- A.Request a public certificate by using AWS Certificate Manager (ACM). Associate the certificate from ACM with the ELB. Write a scheduled AWS Lambda function to renew the certificate every 18 months.
- B.Request a public certificate by using AWS Certificate Manager (ACM). Associate the certificate from ACM with the ELB. ACM will automatically manage the renewal of the certificate.(correct answer)
- C.Register a certificate with a third-party certificate authority (CA). Import this certificate into AWS Certificate Manager (ACM). Associate the certificate from ACM with the ELB. ACM will automatically manage the renewal of the certificate.
- D.Register a certificate with a third-party certificate authority (CA). Configure the ELB to import the certificate directly from the CA. Set the certificate refresh cycle on the ELB to refresh when the certificate is within 3 months of the expiration date.
Correct answer: B
Request a public certificate by using AWS Certificate Manager (ACM). Associate the certificate from ACM with the ELB. ACM will automatically manage the renewal of the certificate.
Explanation
ACM automatically renews public certificates issued by Amazon that are associated with ELBs, CloudFront, etc., removing operational overhead.
Question 5.With the threat of ransomware viruses encrypting and holding company data hostage, which action should be taken to protect an Amazon S3 bucket?
- A.Deny Post, Put, and Delete on the bucket.
- B.Enable server-side encryption on the bucket.
- C.Enable Amazon S3 versioning on the bucket.(correct answer)
- D.Enable snapshots on the bucket.
Correct answer: C
Enable Amazon S3 versioning on the bucket.
Explanation
S3 Versioning allows you to preserve, retrieve, and restore every version of every object. If ransomware encrypts (overwrites) a file, you can restore the previous non-encrypted version.
Question 6.A company is partnering with an external vendor to provide data processing services. For this integration, the vendor must host the company's data in an Amazon S3 bucket in the vendor's AWS account. The vendor is allowing the company to provide an AWS Key Management Service (AWS KMS) key to encrypt the company's data. The vendor has provided an IAM role Amazon Resources Name (ARN) to the company for this integration. What should a CloudOps Engineer do to configure this integration?
- A.Create a new KMS key. Add the vendor's IAM role ARN to the KMS key policy. Provide the new KMS key ARN to the vendor.(correct answer)
- B.Create a new KMS key. Create a new IAM user. Add the vendor's IAM role ARN to an inline policy that is attached to the IAM user. Provide the new IAM user ARN to the vendor.
- C.Configure encryption using the KMS managed S3 key. Add the vendor's IAM role ARN to the KMS managed S3 key policy. Provide the KMS managed S3 key ARN to the vendor.
- D.Configure encryption using the KMS managed S3 key. Create an S3 bucket. Add the vendor's IAM role ARN to the S3 bucket policy. Provide the S3 bucket ARN to the vendor.
Correct answer: A
Create a new KMS key. Add the vendor's IAM role ARN to the KMS key policy. Provide the new KMS key ARN to the vendor.
Explanation
To allow an external account (vendor) to use your KMS key, you must create a Customer Managed Key (CMK) and update its Key Policy to allow the external principal (Vendor's IAM Role ARN).
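A sketch of what that key policy could contain. The account IDs and role name are hypothetical placeholders; the first statement preserves the key owner's administrative access, which is standard practice.

```python
# Sketch of a KMS key policy granting an external account's role use of the key.
# Account IDs and the role name are hypothetical placeholders.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # keep the key owner's admin access
            "Sid": "EnableRootAccess",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "kms:*",
            "Resource": "*",
        },
        {   # allow the vendor's role to use the key for encryption operations
            "Sid": "AllowVendorRole",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/vendor-integration"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}
```

Note that option C fails because AWS managed keys (like `aws/s3`) have key policies you cannot edit; only a customer managed key supports cross-account grants like this.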
Question 7.A database is running on an Amazon RDS Multi-AZ DB instance. A recent security audit found the database to be out of compliance because it was not encrypted. Which approach will resolve the encryption requirement?
- A.Log in to the RDS console and select the encryption box to encrypt the database.
- B.Create a new encrypted Amazon EBS volume and attach it to the instance.
- C.Encrypt the standby replica in the secondary Availability Zone and promote it to the primary instance.
- D.Take a snapshot of the RDS instance, copy and encrypt the snapshot, and then restore to the new RDS instance.(correct answer)
Correct answer: D
Take a snapshot of the RDS instance, copy and encrypt the snapshot, and then restore to the new RDS instance.
Explanation
You cannot encrypt an existing unencrypted RDS instance directly. You must take a snapshot, copy it (enabling encryption during the copy), and restore a new instance from the encrypted snapshot.
Question 8.A CloudOps Engineer receives an alert from Amazon GuardDuty about suspicious network activity on an Amazon EC2 instance. The GuardDuty finding lists a new external IP address as a traffic destination. The CloudOps Engineer does not recognize the external IP address. The CloudOps Engineer must block traffic to the external IP address that GuardDuty identified. Which solution will meet this requirement?
- A.Create a new security group to block traffic to the external IP address. Assign the new security group to the EC2 instance.
- B.Use `VPC` flow logs with Amazon Athena to block traffic to the external IP address.
- C.Create a network `ACL`. Add an outbound deny rule for traffic to the external IP address.(correct answer)
- D.Create a new security group to block traffic to the external IP address. Assign the new security group to the entire `VPC`.
Correct answer: C
Create a network `ACL`. Add an outbound deny rule for traffic to the external IP address.
Explanation
Network ACLs (NACLs) are stateless and support 'Deny' rules, making them the standard way to explicitly block IP addresses at the subnet level. Security groups only support 'Allow' rules.
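A sketch of the NACL entry, in the shape boto3's `create_network_acl_entry` takes. The IP is a placeholder from the documentation range, and the rule number is an illustrative choice:

```python
# Hypothetical outbound deny entry blocking the suspicious destination IP.
deny_entry = {
    "RuleNumber": 90,                # must sort before any allow rules
    "Protocol": "-1",                # all protocols
    "Egress": True,                  # outbound traffic
    "RuleAction": "deny",
    "CidrBlock": "203.0.113.25/32",  # the suspicious IP (placeholder)
}
```

NACL rules are evaluated in ascending rule-number order and the first match wins, which is why the deny entry needs a lower number than any allow rule covering the same traffic.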
Question 9.A web application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Auto Scaling group across multiple Availability Zones. A CloudOps Engineer notices that some of these EC2 instances show up as healthy in the Auto Scaling group but show up as unhealthy in the `ALB` target group. What is a possible reason for this issue?
- A.Security groups are not allowing traffic between the `ALB` and the failing EC2 instances.
- B.The Auto Scaling group health check is configured for EC2 status checks.
- C.The EC2 instances are failing to launch and failing EC2 status checks.
- D.The target group health check is configured with an incorrect port or path.(correct answer)
Correct answer: D
The target group health check is configured with an incorrect port or path.
Explanation
If the Auto Scaling group reports instances as healthy (passing EC2 status checks) but the ALB reports them as unhealthy, the problem is usually in the application or its health-check configuration. An incorrect port or path in the target group health check settings is a common cause.
Question 10.A CloudOps Engineer has enabled AWS CloudTrail in an AWS account. If CloudTrail is disabled, it must be re-enabled immediately. What should the CloudOps Engineer do to meet these requirements WITHOUT writing custom code?
- A.Add the AWS account to AWS Organizations. Enable CloudTrail in the management account.
- B.Create an AWS Config rule that is invoked when CloudTrail configuration changes.(correct answer)
- C.Create an AWS Config rule that is invoked when CloudTrail configuration changes.
- D.Create an Amazon EventBridge (Amazon CloudWatch Events) hourly rule with a schedule pattern to run an AWS Systems Manager Automation document to enable CloudTrail.
Correct answer: B
Create an AWS Config rule that is invoked when CloudTrail configuration changes.
Explanation
AWS Config managed rules (such as `cloudtrail-enabled`) continuously evaluate whether CloudTrail is enabled, and Config automatic remediation can re-enable it when the rule finds it disabled, all without writing custom code.
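A sketch of the Config rule definition, in the shape the `PutConfigRule` API takes. `CLOUD_TRAIL_ENABLED` is a real managed rule identifier; the rule name is our own choice.

```python
# Sketch of an AWS Config managed rule that checks CloudTrail is enabled.
# Pairing this rule with a Config remediation action re-enables CloudTrail
# automatically when the rule reports NON_COMPLIANT.
config_rule = {
    "ConfigRuleName": "cloudtrail-must-be-enabled",
    "Source": {
        "Owner": "AWS",                          # AWS managed rule
        "SourceIdentifier": "CLOUD_TRAIL_ENABLED",
    },
}
```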
Question 11.A CloudOps Engineer needs to give users the ability to upload objects to an Amazon S3 bucket. The CloudOps Engineer creates a presigned URL and provides the URL to a user, but the user cannot upload an object to the S3 bucket. The presigned URL has not expired, and no bucket policy is applied to the S3 bucket. Which of the following could be the cause of this problem?
- A.The user has not properly configured the AWS CLI with their access key and secret access key.
- B.The CloudOps Engineer does not have the necessary permissions to upload the object to the S3 bucket.(correct answer)
- C.The CloudOps Engineer must apply a bucket policy to the S3 bucket to allow the user to upload the object.
- D.The object already has been uploaded through the use of the presigned URL, so the presigned URL is no longer valid.
Correct answer: B
The CloudOps Engineer does not have the necessary permissions to upload the object to the S3 bucket.
Explanation
Presigned URLs inherit the permissions of the creator at the time of creation. If the CloudOps Engineer (creator) lacks 'PutObject' permissions, the URL will fail for uploads.
Question 12.A company runs a web application on three Amazon EC2 instances behind an Application Load Balancer (ALB). The company notices that random periods of increased traffic cause a degradation in the application's performance. A CloudOps Engineer must scale the application to meet the increased traffic. Which solution meets these requirements?
- A.Create an Amazon CloudWatch alarm to monitor application latency and increase the size of each EC2 instance if the desired threshold is reached.
- B.Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor application latency and add an EC2 instance to the `ALB` if the desired threshold is reached.
- C.Deploy the application to an Auto Scaling group of EC2 instances with a target tracking scaling policy. Attach the `ALB` to the Auto Scaling group.(correct answer)
- D.Deploy the application to an Auto Scaling group of EC2 instances with a scheduled scaling policy. Attach the `ALB` to the Auto Scaling group.
Correct answer: C
Deploy the application to an Auto Scaling group of EC2 instances with a target tracking scaling policy. Attach the `ALB` to the Auto Scaling group.
Explanation
Auto Scaling with Target Tracking (e.g., maintain 50% CPU or specific request count) is the most standard and effective way to handle variable traffic loads automatically.
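A sketch of a target tracking policy configuration, in the shape boto3's `put_scaling_policy` takes for an Auto Scaling group. The 50% CPU target is illustrative:

```python
# Sketch of a target tracking scaling policy: the ASG adds or removes
# instances to keep average CPU near the target value.
target_tracking = {
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # keep average CPU around 50%
    },
}
```

Scheduled scaling (option D) fits predictable traffic patterns; the question's "random periods of increased traffic" call for a reactive policy like this one.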
Question 13.A company uses an Amazon Elastic File System (Amazon EFS) file system to share files across many Linux Amazon EC2 instances. A CloudOps Engineer notices that the file system's `PercentIOLimit` metric is consistently at `100%` for 15 minutes or longer. The CloudOps Engineer also notices that the application that reads and writes to that file system is performing poorly. The application requires high throughput and IOPS while accessing the file system. What should the CloudOps Engineer do to remediate the consistently high `PercentIOLimit` metric?
- A.Create a new EFS file system that uses Max I/O performance mode. Use AWS DataSync to migrate data to the new EFS file system.(correct answer)
- B.Create an EFS lifecycle policy to transition future files to the Infrequent Access (IA) storage class to improve performance. Use AWS DataSync to migrate existing data to IA storage.
- C.Modify the existing EFS file system and activate Max I/O performance mode.
- D.Modify the existing EFS file system and activate `Provisioned Throughput` mode.
Correct answer: A
Create a new EFS file system that uses Max I/O performance mode. Use AWS DataSync to migrate data to the new EFS file system.
Explanation
High `PercentIOLimit` indicates the file system is hitting the General Purpose mode IOPS limit. Max I/O mode removes this limit but has higher latency. You cannot switch modes on an existing EFS; you must create a new one and migrate.
Question 14.A company needs to restrict access to an Amazon S3 bucket to Amazon EC2 instances in a `VPC` only. All traffic must be over the AWS private network. What actions should the CloudOps Engineer take to meet these requirements?
- A.Create a `VPC` endpoint for the S3 bucket, and create an IAM policy that conditionally limits all S3 actions on the bucket to the `VPC` endpoint as the source.
- B.Create a `VPC` endpoint for the S3 bucket, and create an S3 bucket policy that conditionally limits all S3 actions on the bucket to the `VPC` endpoint as the source.(correct answer)
- C.Create a service-linked role for Amazon EC2 that allows the EC2 instances to interact directly with Amazon S3, and attach an IAM policy to the role that allows the EC2 instances full access to the S3 bucket.
- D.Create a `NAT` gateway in the `VPC`, and modify the `VPC` route table to route all traffic destined for Amazon S3 through the `NAT` gateway.
Correct answer: B
Create a `VPC` endpoint for the S3 bucket, and create an S3 bucket policy that conditionally limits all S3 actions on the bucket to the `VPC` endpoint as the source.
Explanation
A VPC Endpoint allows private access to S3. To restrict access to *only* that VPC, you must apply an S3 Bucket Policy that creates a `Deny` rule for anything that is NOT the specific VPC Endpoint (using `StringNotEquals` on `aws:sourceVpce`).
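A sketch of what that bucket policy could look like. The bucket name and endpoint ID are hypothetical placeholders:

```python
# Sketch of an S3 bucket policy denying all access except through a specific
# VPC endpoint. Bucket name and endpoint ID are hypothetical placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnlessFromVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
        "Condition": {
            # Deny any request that did NOT arrive via this VPC endpoint
            "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}
```

The explicit Deny with `StringNotEquals` is the key pattern: an Allow alone cannot shut out other network paths, because a Deny always wins in IAM evaluation.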
Question 15.A company is managing many accounts by using a single organization in AWS Organizations. The company is reviewing internal security of its AWS environment. The company's security engineer has their own AWS account and wants to review the `VPC` configuration of developer AWS accounts. Which solution will meet these requirements in the MOST secure manner?
- A.Create an IAM policy in each developer account that has read-only access related to `VPC` resources. Assign the policy to an IAM user. Share the user credentials with the security engineer.
- B.Create an IAM policy in each developer account that has administrator access to all Amazon EC2 actions, including `VPC` actions. Assign the policy to an IAM user. Share the user credentials with the security engineer.
- C.Create an IAM policy in each developer account that has administrator access related to `VPC` resources. Assign the policy to a cross-account IAM role. Ask the security engineer to assume the role from their account.
- D.Create an IAM policy in each developer account that has read-only access related to `VPC` resources. Assign the policy to a cross-account IAM role. Ask the security engineer to assume the role from their account.(correct answer)
Correct answer: D
Create an IAM policy in each developer account that has read-only access related to `VPC` resources. Assign the policy to a cross-account IAM role. Ask the security engineer to assume the role from their account.
Explanation
Cross-account IAM roles are the secure, standard way to grant access between AWS accounts. They avoid sharing long-term credentials, and a read-only policy follows the principle of least privilege.
Question 16.A company migrated an I/O intensive application to an Amazon EC2 general purpose instance. The EC2 instance has a single General Purpose SSD Amazon Elastic Block Store (Amazon EBS) volume attached. Application users report that certain actions that require intensive reading and writing to the disk are taking much longer than normal or are failing completely. After reviewing the performance metrics of the EBS volume, a CloudOps Engineer notices that the `VolumeQueueLength` metric is consistently high during the same times in which the users are reporting issues. The CloudOps Engineer needs to resolve this problem to restore full performance to the application. Which action will meet these requirements?
- A.Modify the instance type to be storage optimized.
- B.Modify the volume properties by deselecting Auto-Enable Volume IO.
- C.Modify the volume properties to increase the IOPS.(correct answer)
- D.Modify the instance to enable enhanced networking.
Correct answer: C
Modify the volume properties to increase the IOPS.
Explanation
High `VolumeQueueLength` indicates the EBS volume cannot keep up with the I/O requests. Increasing IOPS (by modifying volume type or size) resolves this bottleneck.
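Why modifying the volume helps: gp2 baseline performance scales with size at 3 IOPS per GiB, with a 100 IOPS floor and a 16,000 IOPS ceiling. A rough sketch of that formula:

```python
# gp2 baseline IOPS scale with volume size: 3 IOPS per GiB, floor 100,
# ceiling 16,000. Growing the volume (or moving to gp3/io2, where IOPS are
# provisioned independently) relieves a saturated I/O queue.
def gp2_baseline_iops(size_gib: int) -> int:
    return min(max(3 * size_gib, 100), 16_000)

assert gp2_baseline_iops(30) == 100        # small volumes get the 100 IOPS floor
assert gp2_baseline_iops(1_000) == 3_000
assert gp2_baseline_iops(6_000) == 16_000  # capped at 16,000
```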
Question 17.A company has multiple AWS Site-to-Site `VPN` connections between a `VPC` and its branch offices. The company manages an Amazon Elasticsearch Service (Amazon ES) domain that is configured with public access. The Amazon ES domain has an open domain access policy. A CloudOps Engineer needs to ensure that Amazon ES can be accessed only from the branch offices while preserving existing data. Which solution will meet these requirements?
- A.Configure an identity-based access policy on Amazon ES. Add an allow statement to the policy that includes the Amazon Resource Name (ARN) for each branch office `VPN` connection.
- B.Configure an IP-based domain access policy on Amazon ES. Add an allow statement to the policy that includes the private IP `CIDR` blocks from each branch office network.(correct answer)
- C.Deploy a new Amazon ES domain in private subnets in a `VPC`, and import a snapshot from the old domain. Create a security group that allows inbound traffic from the branch office `CIDR` blocks.
- D.Reconfigure the Amazon ES domain in private subnets in a `VPC`. Create a security group that allows inbound traffic from the branch office `CIDR` blocks.
Correct answer: B
Configure an IP-based domain access policy on Amazon ES. Add an allow statement to the policy that includes the private IP `CIDR` blocks from each branch office network.
Explanation
The domain is public and must keep its existing data, so the fix is an IP-based domain access policy: add an allow statement that permits only the branch-office CIDR blocks, so requests from any other address are rejected. Options C and D would require rebuilding or moving the domain into a VPC, which is unnecessary and risks the existing data.
Question 18.A company is managing many accounts by using a single organization in AWS Organizations. The organization has all features enabled. The company wants to turn on AWS Config in all the accounts of the organization and in all AWS Regions. What should a CloudOps Engineer do to meet these requirements in the MOST operationally efficient way?
- A.Use AWS CloudFormation `StackSets` to deploy stack instances that turn on AWS Config in all accounts and in all Regions.(correct answer)
- B.Use AWS CloudFormation `StackSets` to deploy stack policies that turn on AWS Config in all accounts and in all Regions.
- C.Use Service Control Policies (SCPs) to configure AWS Config in all accounts and in all Regions.
- D.Create a script that uses the AWS CLI to turn on AWS Config in all accounts in the organization. Run the script from the organization's management account.
Correct answer: A
Use AWS CloudFormation `StackSets` to deploy stack instances that turn on AWS Config in all accounts and in all Regions.
Explanation
CloudFormation StackSets allow you to deploy stack instances (resources like AWS Config Recorders) across all accounts in an Organization and all Regions in a single operation.
Question 19.A company's CloudOps Engineer deploys four new Amazon EC2 instances by using the standard Amazon Linux 2 Amazon Machine Image (AMI). The company needs to be able to use AWS Systems Manager to manage the instances. The CloudOps Engineer notices that the instances do not appear in the Systems Manager console. What must the CloudOps Engineer do to resolve this issue?
- A.Connect to each instance by using `SSH`. Install Systems Manager Agent on each instance. Configure Systems Manager Agent to start automatically when the instances start up.
- B.Use AWS Certificate Manager (ACM) to create a TLS certificate. Import the certificate into each instance. Configure Systems Manager Agent to use the TLS certificate for secure communications.
- C.Connect to each instance by using `SSH`. Create an `ssm-user` account. Add the `ssm-user` account to the `/etc/sudoers` file.
- D.Attach an IAM instance profile to the instances. Ensure that the instance profile contains the `AmazonSSMManagedInstanceCore` policy.(correct answer)
Correct answer: D
Attach an IAM instance profile to the instances. Ensure that the instance profile contains the `AmazonSSMManagedInstanceCore` policy.
Explanation
For Systems Manager to manage an EC2 instance, the SSM Agent (which is pre-installed on Amazon Linux 2) requires an IAM Role with the `AmazonSSMManagedInstanceCore` policy to communicate with the SSM service.
Question 20.A development team recently deployed a new version of a web application to production. After the release, penetration testing revealed a cross-site scripting vulnerability that could expose user data. Which AWS service will mitigate this issue?
- A.AWS Shield Standard.
- B.AWS WAF.(correct answer)
- C.Elastic Load Balancing.
- D.Amazon Cognito.
Correct answer: B
AWS WAF.
Explanation
AWS WAF (Web Application Firewall) protects against common web exploits like SQL injection and Cross-Site Scripting (XSS).
Question 21.An Amazon EC2 instance is running an application that uses Amazon Simple Queue Service (Amazon SQS) queues. A CloudOps Engineer must ensure that the application can read, write, and delete messages from the SQS queues. Which solution will meet these requirements in the MOST secure manner?
- A.Create an IAM user with an IAM policy that allows the `sqs:SendMessage` permission, the `sqs:ReceiveMessage` permission, and the `sqs:DeleteMessage` permission to the appropriate queues. Embed the IAM user's credentials in the application's configuration.
- B.Create an IAM user with an IAM policy that allows the `sqs:SendMessage` permission, the `sqs:ReceiveMessage` permission, and the `sqs:DeleteMessage` permission to the appropriate queues. Export the IAM user's access key and secret access key as environment variables on the EC2 instance.
- C.Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows `sqs:*` permissions to the appropriate queues.
- D.Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the `sqs:SendMessage` permission, the `sqs:ReceiveMessage` permission, and the `sqs:DeleteMessage` permission to the appropriate queues.(correct answer)
Correct answer: D
Create and associate an IAM role that allows EC2 instances to call AWS services. Attach an IAM policy to the role that allows the `sqs:SendMessage` permission, the `sqs:ReceiveMessage` permission, and the `sqs:DeleteMessage` permission to the appropriate queues.
Explanation
Use IAM Roles for EC2 instances to avoid managing long-term credentials. Grant only the necessary permissions (Least Privilege), rather than `sqs:*`.
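A sketch of the least-privilege policy the instance role would carry; the queue ARN is a hypothetical placeholder:

```python
# Least-privilege policy sketch for the instance role: only the three SQS
# actions the application needs, scoped to one queue. The queue ARN is a
# hypothetical placeholder.
sqs_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "sqs:SendMessage",
            "sqs:ReceiveMessage",
            "sqs:DeleteMessage",
        ],
        "Resource": "arn:aws:sqs:us-east-1:111122223333:orders-queue",
    }],
}
```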
Question 22.A company has a policy that requires all Amazon EC2 instances to have a specific set of tags. If an EC2 instance does not have the required tags, the noncompliant instance should be terminated. What is the MOST operationally efficient solution that meets these requirements?
- A.Create an Amazon EventBridge (Amazon CloudWatch Events) rule to send all EC2 instance state changes to an AWS Lambda function to determine if each instance is compliant. Terminate any noncompliant instances.
- B.Create an IAM policy that enforces all EC2 instance tag requirements. If the required tags are not in place for an instance, the policy will terminate the noncompliant instance.
- C.Create an AWS Lambda function to determine if each EC2 instance is compliant and terminate an instance if it is noncompliant. Schedule the Lambda function to invoke every 5 minutes.
- D.Create an AWS Config rule to check if the required tags are present. If an EC2 instance is noncompliant, invoke an AWS Systems Manager Automation document to terminate the instance.(correct answer)
Correct answer: D
Create an AWS Config rule to check if the required tags are present. If an EC2 instance is noncompliant, invoke an AWS Systems Manager Automation document to terminate the instance.
Explanation
AWS Config with the `required-tags` managed rule can detect non-compliant instances. Automatic remediation via SSM Automation can then be configured to terminate them.
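A sketch of that rule definition, in the shape the `PutConfigRule` API takes. `REQUIRED_TAGS` is the real managed rule identifier; the tag keys here are illustrative.

```python
import json

# Sketch of the required-tags managed Config rule, scoped to EC2 instances.
# Note that InputParameters is a JSON *string*, per the PutConfigRule API.
required_tags_rule = {
    "ConfigRuleName": "ec2-required-tags",
    "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
    "InputParameters": json.dumps({"tag1Key": "CostCenter", "tag2Key": "Owner"}),
}
```

A remediation configuration would then pair this rule with an SSM Automation document that terminates the noncompliant instance.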
Question 23.A CloudOps Engineer wants to upload a file that is 1 TB in size from on-premises to an Amazon S3 bucket using multipart uploads. What should the CloudOps Engineer do to meet this requirement?
- A.Upload the file using the S3 console.
- B.Use the `s3api copy-object` command.
- C.Use the `s3api put-object` command.
- D.Use the `s3 cp` command.(correct answer)
Correct answer: D
Use the `s3 cp` command.
Explanation
The AWS CLI high-level command `aws s3 cp` automatically performs multipart uploads for large files. `s3api put-object` does not.
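The arithmetic behind this matters: S3 allows at most 10,000 parts per upload, and every part except the last must be at least 5 MiB. `aws s3 cp` chooses a part size automatically; the sketch below just checks that a plausible part size fits a 1 TiB object within those limits.

```python
# Multipart upload arithmetic for a 1 TiB object. The 128 MiB part size is an
# illustrative choice, not what the CLI necessarily picks.
import math

object_size = 1 * 1024**4        # 1 TiB in bytes
part_size = 128 * 1024**2        # 128 MiB per part

parts = math.ceil(object_size / part_size)
assert parts == 8192             # well under the 10,000-part limit
assert part_size >= 5 * 1024**2  # meets the 5 MiB minimum part size
```

A single `put-object` call, by contrast, is capped at 5 GiB, so option C cannot upload a 1 TB file at all.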
Question 24.A CloudOps Engineer launches an Amazon EC2 Linux instance in a public subnet. When the instance is running, the CloudOps Engineer obtains the public IP address and attempts to remotely connect to the instance multiple times. However, the CloudOps Engineer always receives a timeout error. Which action will allow the CloudOps Engineer to remotely connect to the instance?
- A.Add a route table entry in the public subnet for the CloudOps Engineer's IP address.
- B.Add an outbound network `ACL` rule to allow `TCP` port `22` for the CloudOps Engineer's IP address.
- C.Modify the instance security group to allow inbound `SSH` traffic from the CloudOps Engineer's IP address.(correct answer)
- D.Modify the instance security group to allow outbound `SSH` traffic to the CloudOps Engineer's IP address.
Correct answer: C
Modify the instance security group to allow inbound `SSH` traffic from the CloudOps Engineer's IP address.
Explanation
Timeout errors usually indicate that traffic is being silently dropped by a Security Group. Ensuring port 22 (SSH) is open in the inbound rules for your IP is the fix.
Question 25.A company wants to use only IPv6 for all its Amazon EC2 instances. The EC2 instances must not be accessible from the internet, but the EC2 instances must be able to access the internet. The company creates a dual-stack `VPC` and IPv6-only subnets. How should a CloudOps Engineer configure the `VPC` to meet these requirements?
- A.Create and attach a `NAT` gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the `NAT` gateway. Attach the custom route table to the IPv6-only subnets.
- B.Create and attach an internet gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the internet gateway. Attach the custom route table to the IPv6-only subnets.
- C.Create and attach an egress-only internet gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the egress-only internet gateway. Attach the custom route table to the IPv6-only subnets.(correct answer)
- D.Create and attach an internet gateway and a `NAT` gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the internet gateway and all IPv4 traffic to the `NAT` gateway. Attach the custom route table to the IPv6-only subnets.
Correct answer: C
Create and attach an egress-only internet gateway. Create a custom route table that includes an entry to point all IPv6 traffic to the egress-only internet gateway. Attach the custom route table to the IPv6-only subnets.
Explanation
Egress-Only Internet Gateways allow IPv6 traffic to go out to the internet but prevent incoming connections initiated from the internet. This is the IPv6 equivalent of a NAT Gateway.
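The stateful, outbound-only behavior can be modeled in a few lines of Python. Everything here (function names, the flow-tracking set, the sample IPv6 addresses) is illustrative only:

```python
def egress_only_gateway():
    """Model an egress-only internet gateway: it forwards IPv6 traffic
    that instances initiate and remembers the flow, but drops any
    connection initiated from the internet. Sketch only."""
    established = set()  # flows opened from inside the VPC

    def outbound(src, dst):
        established.add((src, dst))
        return "forwarded"

    def inbound(src, dst):
        # Allow only replies to flows an instance already opened.
        if (dst, src) in established:
            return "forwarded"
        return "dropped"

    return outbound, inbound

out, inb = egress_only_gateway()
out("2001:db8::10", "2001:db8:ffff::1")         # instance reaches the internet
print(inb("2001:db8:ffff::1", "2001:db8::10"))  # reply: forwarded
print(inb("2001:db8:ffff::2", "2001:db8::10"))  # unsolicited: dropped
```

This is why options A and D fail: NAT gateways handle IPv4, and a plain internet gateway would also admit inbound connections.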
Question 26.A CloudOps Engineer wants to manage a web server application with AWS Elastic Beanstalk. The Elastic Beanstalk service must maintain full capacity for new deployments at all times. Which deployment policies satisfy this requirement? (Select TWO.)
- A.All at once.
- B.Immutable.(correct answer)
- C.Rebuild.
- D.Rolling.
- E.Rolling with additional batch.(correct answer)
Correct answer: B, E
Immutable. / Rolling with additional batch.
Explanation
'Immutable' deployment creates a fresh set of instances for the new version, maintaining full capacity of the old version until the new one is ready. 'Rolling with additional batch' launches a new batch before terminating the old one, ensuring capacity is never reduced below the desired count.
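The capacity math behind each policy can be sketched as a small Python function. This is a simplified, assumption-laden model (it ignores health checks and batch sizing details), not Elastic Beanstalk's actual scheduler:

```python
def min_capacity_during_deploy(policy, desired, batch):
    """Lowest in-service instance count while a deployment runs,
    for a few Elastic Beanstalk policies (illustrative model)."""
    if policy == "all_at_once":
        return 0                # every instance is updated at once
    if policy == "rolling":
        return desired - batch  # one batch is out of service at a time
    if policy == "rolling_with_additional_batch":
        return desired          # an extra batch launches before any terminate
    if policy == "immutable":
        return desired          # old fleet stays until the new one is healthy
    raise ValueError(policy)

for p in ("all_at_once", "rolling", "rolling_with_additional_batch", "immutable"):
    print(p, min_capacity_during_deploy(p, desired=4, batch=1))
```

Only the two correct answers keep the minimum at the full desired count of 4; plain rolling dips to 3, and all-at-once drops to 0.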
Question 27.A company asks a CloudOps Engineer to ensure that AWS CloudTrail files are not tampered with after they are created. Currently, the company uses AWS Identity and Access Management (IAM) to restrict access to specific trails. The company's security team needs the ability to trace the integrity of each file. What is the MOST operationally efficient solution that meets these requirements?
- A.Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function when a new file is delivered. Configure the Lambda function to compute an MD5 hash check on the file and store the result in an Amazon DynamoDB table. The security team can use the values that are stored in DynamoDB to verify the integrity of the delivered files.
- B.Create an AWS Lambda function that is invoked each time a new file is delivered to the CloudTrail bucket. Configure the Lambda function to compute an MD5 hash check on the file and store the result as a tag in an Amazon S3 object. The security team can use the information in the tag to verify the integrity of the delivered files.
- C.Enable the CloudTrail file integrity feature on an Amazon S3 bucket. Create an IAM policy that grants the security team access to the file integrity logs that are stored in the S3 bucket.
- D.Enable the CloudTrail file integrity feature on the trail. The security team can use the digest file that is created by CloudTrail to verify the integrity of the delivered files.(correct answer)
Correct answer: D
Enable the CloudTrail file integrity feature on the trail. The security team can use the digest file that is created by CloudTrail to verify the integrity of the delivered files.
Explanation
CloudTrail Log File Integrity Validation produces a digest file that allows you to cryptographically verify that the log files have not been modified.
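The core idea, comparing a file's current hash against a digest recorded at delivery time, can be shown with Python's `hashlib`. CloudTrail's real digest files additionally chain and sign entries with SHA-256 RSA signatures; this sketch only demonstrates the hash-comparison principle:

```python
import hashlib

def digest_of(log_bytes):
    """SHA-256 hex digest of a log file's contents (sketch only)."""
    return hashlib.sha256(log_bytes).hexdigest()

original = b'{"Records": [{"eventName": "StopInstances"}]}'
recorded = digest_of(original)            # stored when the file is delivered

assert digest_of(original) == recorded    # untouched file verifies
tampered = original.replace(b"Stop", b"Start")
assert digest_of(tampered) != recorded    # any edit changes the hash
```

Because the trail produces and signs these digests natively, option D avoids the custom Lambda/DynamoDB plumbing of options A and B, which is what makes it the most operationally efficient choice.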
Question 28.A company has multiple Amazon EC2 instances that run a resource-intensive application in a development environment. A CloudOps Engineer is implementing a solution to stop these EC2 instances when they are not in use. Which solution will meet this requirement?
- A.Assess AWS CloudTrail logs to verify that there is no EC2 API activity. Invoke an AWS Lambda function to stop the EC2 instances.
- B.Create an Amazon CloudWatch alarm to stop the EC2 instances when the average CPU utilization is lower than `5%` for a 30-minute period.(correct answer)
- C.Create an Amazon CloudWatch metric to stop the EC2 instances when the `VolumeReadBytes` metric is lower than `500` for a 30-minute period.
- D.Use AWS Config to invoke an AWS Lambda function to stop the EC2 instances based on resource configuration changes.
Correct answer: B
Create an Amazon CloudWatch alarm to stop the EC2 instances when the average CPU utilization is lower than `5%` for a 30-minute period.
Explanation
CloudWatch Alarms can trigger EC2 actions, such as stopping an instance, based on metric thresholds (e.g., low CPU utilization indicating idleness).
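The alarm's decision can be sketched in Python. Assume six 5-minute `CPUUtilization` datapoints cover the 30-minute window; real CloudWatch alarms evaluate per period with configurable statistics, so treat this purely as a model:

```python
def should_stop(cpu_datapoints, threshold=5.0, periods=6):
    """Mimic an alarm that fires when the average CPU of the last
    `periods` five-minute datapoints (30 minutes) is below `threshold`
    percent. Simplified model of CloudWatch evaluation."""
    if len(cpu_datapoints) < periods:
        return False  # not enough data to evaluate yet
    recent = cpu_datapoints[-periods:]
    return sum(recent) / periods < threshold

idle = [3.1, 2.8, 4.0, 1.9, 2.2, 3.5]
busy = [40.0, 55.2, 61.0, 38.4, 47.1, 52.9]
print(should_stop(idle))  # True  -> the alarm's EC2 action stops the instance
print(should_stop(busy))  # False
```

The key exam point is that the stop action is built into CloudWatch alarms (EC2 actions), so no Lambda, CloudTrail parsing, or AWS Config rule is needed.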
Question 29.A company creates custom AMIs by launching new Amazon EC2 instances from an AWS CloudFormation template, installing and configuring the necessary software through AWS OpsWorks, and taking an image of each EC2 instance. The installation and configuration process takes 2 to 3 hours, but at times the process stalls because of installation errors. The CloudOps Engineer must modify the CloudFormation template so that if the process stalls, the entire stack fails and rolls back. Based on these requirements, what should be added to the template?
- A.`Conditions` with a timeout set to 4 hours.
- B.`CreationPolicy` with timeout set to 4 hours.(correct answer)
- C.`DependsOn` a timeout set to 4 hours.
- D.`Metadata` with a timeout set to 4 hours.
Correct answer: B
`CreationPolicy` with timeout set to 4 hours.
Explanation
A `CreationPolicy` attribute allows you to pause the creation of a resource (like an EC2 instance) until a signal is received (e.g., from `cfn-signal` after software installation). If the timeout expires without a signal (stalling), the stack rolls back.
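A minimal CloudFormation resource in JSON form illustrates where the attribute sits. The resource name and AMI ID below are made up; `Timeout` uses ISO 8601 duration syntax (`PT4H` = 4 hours). The snippet is parsed with Python's `json` module only to show the structure:

```python
import json

# Illustrative template: stack creation pauses at this resource until
# cfn-signal reports success; if no signal arrives before the timeout,
# creation fails and the stack rolls back.
template = json.loads("""
{
  "Resources": {
    "ImageBuilder": {
      "Type": "AWS::EC2::Instance",
      "Properties": {"ImageId": "ami-12345678"},
      "CreationPolicy": {
        "ResourceSignal": {"Count": 1, "Timeout": "PT4H"}
      }
    }
  }
}
""")

signal = template["Resources"]["ImageBuilder"]["CreationPolicy"]["ResourceSignal"]
print(signal["Timeout"])  # PT4H -> four hours, comfortably above the 2-3 hour build
```

A 4-hour timeout leaves headroom over the normal 2-3 hour build, so only a genuinely stalled install triggers the rollback.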
Question 30.A company plans to run a public web application on Amazon EC2 instances behind an Elastic Load Balancer (ELB). The company's security team wants to protect the website by using AWS Certificate Manager (ACM) certificates. The ELB must automatically redirect any `HTTP` requests to `HTTPS`. Which solution will meet these requirements?
- A.Create an Application Load Balancer that has one `HTTPS` listener on port `80`. Attach an SSL/TLS certificate to listener port `80`. Create a rule to redirect requests from `HTTP` to `HTTPS`.
- B.Create an Application Load Balancer that has one `HTTP` listener on port `80` and one `HTTPS` protocol listener on port `443`. Attach an SSL/TLS certificate to listener port `443`. Create a rule to redirect requests from port `80` to port `443`.(correct answer)
- C.Create an Application Load Balancer that has two `TCP` listeners on port `80` and port `443`. Attach an SSL/TLS certificate to listener port `443`. Create a rule to redirect requests from port `80` to port `443`.
- D.Create a Network Load Balancer that has two `TCP` listeners on port `80` and port `443`. Attach an SSL/TLS certificate to listener port `443`. Create a rule to redirect requests from port `80` to port `443`.
Correct answer: B
Create an Application Load Balancer that has one `HTTP` listener on port `80` and one `HTTPS` protocol listener on port `443`. Attach an SSL/TLS certificate to listener port `443`. Create a rule to redirect requests from port `80` to port `443`.
Explanation
ALBs support HTTP (80) and HTTPS (443) listeners. To redirect, you configure the HTTP listener to return a Redirect action to HTTPS port 443.
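What the redirect rule produces can be sketched with Python's standard `urllib.parse`. The function name is made up, and real ALB redirect actions use placeholder substitution (`#{host}`, `#{path}`, `#{query}`) rather than code, so treat this as a model of the resulting `Location` header:

```python
from urllib.parse import urlsplit, urlunsplit

def redirect_to_https(url):
    """Build the (status, Location) an ALB's HTTP:80 listener would
    return under a redirect-to-HTTPS rule: HTTP 301, same host, path,
    and query, protocol HTTPS on port 443. Illustrative sketch."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    target = urlunsplit(("https", host, parts.path, parts.query, ""))
    return 301, target

print(redirect_to_https("http://www.example.com/login?next=/home"))
```

Options C and D fail because TCP listeners (and Network Load Balancers generally) operate at layer 4 and cannot issue HTTP redirects; the redirect action is an ALB layer 7 feature.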
Ready for the full SOA-C03 exam?
Get all 390+ Questions, timed simulation, and weak-area analytics. Plans from $2.99 — credits never expire.
Pass SOA-C03 on your first try
Join candidates using DummyExams to practice with realistic timed exams, detailed explanations, and weak-area analytics.
Start full SOA-C03 practice exam