DummyExams

Free Practice · No Signup Required

30 Free AWS DBS-C01 Practice Questions

Real practice questions for the AWS Certified Database Specialty (DBS-C01) exam, with answers and detailed explanations. Updated 2026.

Free questions

30

Passing score

750 out of 1000

Exam time

180 minutes

Question pool

160+ Questions

Below are 30 real practice questions for the AWS Certified Database Specialty (DBS-C01) exam. Each question includes the correct answer and a detailed explanation. Use these to benchmark your readiness: if you score below 70% on these 30 questions, plan for at least 4 more weeks of study before booking.

DBS-C01 Practice Questions

  1. Question 1. A company has deployed an e-commerce web application in a new AWS account. An Amazon RDS for MySQL Multi-AZ DB instance is part of this deployment with a database-1.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com endpoint listening on port 3306. The company's Database Specialist is able to log in to MySQL and run queries from the bastion host using these details. When users try to utilize the application hosted in the AWS account, they are presented with a generic error message. The application servers are logging a 'could not connect to server: Connection times out' error message to Amazon CloudWatch Logs. What is the cause of this error?

    • A.The user name and password the application is using are incorrect.
    • B.The security group assigned to the application servers does not have the necessary rules to allow inbound connections from the DB instance.
    • C.The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.(correct answer)
    • D.The user name and password are correct, but the user is not authorized to use the DB instance.

    Correct answer: C

    The security group assigned to the DB instance does not have the necessary rules to allow inbound connections from the application servers.

    Explanation

    RDS connection timeout: a timeout points to a network-layer block, not bad credentials. The security group assigned to the DB instance must allow inbound traffic on port 3306 from the application servers. Security groups are stateful, so the response traffic is allowed automatically and no outbound rule change is needed.
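The fix described by the correct answer can be sketched with boto3; the security-group IDs and helper name here are hypothetical, and any client exposing the EC2 `authorize_security_group_ingress` call will do:

```python
# Sketch: add an inbound rule on the DB instance's security group that
# references the application tier's security group (IDs are hypothetical).
def allow_app_to_db(ec2_client, db_sg_id, app_sg_id, port=3306):
    ec2_client.authorize_security_group_ingress(
        GroupId=db_sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Referencing the app tier's security group instead of CIDR
            # ranges keeps the rule valid as instances come and go.
            "UserIdGroupPairs": [{"GroupId": app_sg_id}],
        }],
    )
```

Because security groups are stateful, no matching outbound rule is needed on the DB side for the response traffic.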

  2. Question 2. An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future. Which settings will meet this requirement? (Choose three.)

    • A.Set DeletionProtection to True.(correct answer)
    • B.Set MultiAZ to True.
    • C.Set TerminationProtection to True.
    • D.Set DeleteAutomatedBackups to False.(correct answer)
    • E.Set DeletionPolicy to Delete.
    • F.Set DeletionPolicy to Retain.(correct answer)

    Correct answer: A, D, F

    Set DeletionProtection to True. / Set DeleteAutomatedBackups to False. / Set DeletionPolicy to Retain.

    Explanation

    Preventing accidental RDS deletion takes three settings working together: DeletionProtection set to True blocks deletion of the instance through the API or console, DeleteAutomatedBackups set to False preserves automated backups if the instance is ever deleted, and a DeletionPolicy of Retain tells CloudFormation to keep the DB instance when the stack itself is deleted. TerminationProtection applies to stacks rather than individual resources, and a DeletionPolicy of Delete is the opposite of what is needed.
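In template form, the three settings look roughly like this (the logical resource name and engine are illustrative, and other required properties are omitted):

```yaml
Resources:
  AppDatabase:                      # illustrative logical name
    Type: AWS::RDS::DBInstance
    DeletionPolicy: Retain          # stack deletion keeps the DB instance
    Properties:
      Engine: mysql
      DeletionProtection: true      # API/console deletion is blocked
      DeleteAutomatedBackups: false # automated backups survive deletion
      # ...other required properties (instance class, storage, etc.)
```

Note that DeletionPolicy is a resource-level attribute, not a property, which is why it sits outside the Properties block.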

  3. Question 3. A Database Specialist is troubleshooting an application connection failure on an Amazon Aurora DB cluster with multiple Aurora Replicas that had been running with no issues for the past 2 months. The connection failure lasted for 5 minutes and corrected itself after that. The Database Specialist reviewed the Amazon RDS events and determined a failover event occurred at that time. The failover process took around 15 seconds to complete. What is the MOST likely cause of the 5-minute connection outage?

    • A.After a database crash, Aurora needed to replay the redo log from the last database checkpoint.
    • B.The client-side application is caching the DNS data and its TTL is set too high.(correct answer)
    • C.After failover, the Aurora DB cluster needs time to warm up before accepting client connections.
    • D.There were no active Aurora Replicas in the Aurora DB cluster.

    Correct answer: B

    The client-side application is caching the DNS data and its TTL is set too high.

    Explanation

    Aurora failover DNS caching: the failover itself completed in about 15 seconds, so the remaining outage was client-side. The application cached the cluster endpoint's DNS record with a TTL set too high and kept trying to reach the old primary until the stale cache entry expired.

  4. Question 4. A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company's data center. The company's Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine. Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses. What should the Database Specialist do to correct the Data Analysts' inability to connect?

    • A.Restart the DB cluster to apply the SSL change.
    • B.Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.(correct answer)
    • C.Add explicit mappings between the Data Analysts' IP addresses and the instance in the security group assigned to the DB cluster.
    • D.Modify the Data Analysts' local client firewall to allow network traffic to AWS.

    Correct answer: B

    Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.

    Explanation

    Aurora SSL/TLS requirement: once SSL/TLS is required, valid credentials alone are not enough. The Data Analysts must download the Amazon RDS root certificate and reference it in their connection string so the client can establish a trusted TLS session.

  5. Question 5. A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed. What can the Database Specialist do to reduce the overall cost?

    • A.Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
    • B.Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
    • C.Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.(correct answer)
    • D.Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

    Correct answer: C

    Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

    Explanation

    DynamoDB cost reduction for expired data: add an expiration-time attribute to each item and enable Time to Live (TTL) on each table. TTL deletes expired items automatically in the background at no additional cost, which both purges the months-old data and keeps new data from accumulating past 2 days.
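A TTL attribute is just a Number holding an expiry time in epoch seconds. A minimal sketch of building an item DynamoDB would delete roughly two days after it is written (the table keys and the `expires_at` attribute name are hypothetical):

```python
import time

RETENTION_SECONDS = 2 * 24 * 60 * 60  # keep items for 2 days

def item_with_ttl(order_id, payload, now=None):
    """Build a DynamoDB item whose hypothetical 'expires_at' attribute
    holds an epoch-seconds timestamp for the table's TTL setting."""
    now = int(now if now is not None else time.time())
    return {
        "order_id": {"S": order_id},
        "payload": {"S": payload},
        # TTL attributes must be Numbers containing epoch seconds.
        "expires_at": {"N": str(now + RETENTION_SECONDS)},
    }
```

TTL is then enabled once per table (for example via the `update_time_to_live` API) pointing at the chosen attribute; deletion happens in the background and may lag expiry by some time, which is acceptable for cost cleanup.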

  6. Question 6. A company has an on-premises system that tracks various database operations that occur over the lifetime of a database, including database shutdown, deletion, creation, and backup. The company recently moved two databases to Amazon RDS and is looking at a solution that would satisfy these requirements. The data could be used by other systems within the company. Which solution will meet these requirements with minimal effort?

    • A.Create an Amazon CloudWatch Events rule with the operations that need to be tracked on Amazon RDS. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.
    • B.Create an AWS Lambda function to trigger on AWS CloudTrail API calls. Filter on specific RDS API calls and write the output to the tracking systems.
    • C.Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.(correct answer)
    • D.Write RDS logs to Amazon Kinesis Data Firehose. Create an AWS Lambda function to act on these rules and write the output to the tracking systems.

    Correct answer: C

    Create RDS event subscriptions. Have the tracking systems subscribe to specific RDS event system notifications.

    Explanation

    RDS event tracking: RDS event subscriptions publish notifications for database operations such as shutdown, deletion, creation, and backup through Amazon SNS. Other systems subscribe to the SNS topic directly, so no custom Lambda functions or CloudTrail filtering are required.
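Creating such a subscription can be sketched with the RDS API; the subscription name and topic ARN are hypothetical, and the client is passed in so the sketch stays self-contained:

```python
# Sketch: subscribe an SNS topic to RDS lifecycle events so downstream
# tracking systems can consume them. Names and ARNs are hypothetical.
def subscribe_db_events(rds_client, sns_topic_arn):
    return rds_client.create_event_subscription(
        SubscriptionName="db-lifecycle-tracking",
        SnsTopicArn=sns_topic_arn,
        SourceType="db-instance",
        # Event categories covering the operations tracked today.
        EventCategories=["creation", "deletion", "backup", "availability"],
        Enabled=True,
    )
```

Any number of tracking systems can then subscribe to the SNS topic without touching the databases again.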

  7. Question 7. A clothing company uses a custom ecommerce application and a PostgreSQL database to sell clothes to thousands of users from multiple countries. The company is migrating its application and database from its on-premises data center to the AWS Cloud. The company has selected Amazon EC2 for the application and Amazon RDS for PostgreSQL for the database. The company requires database passwords to be changed every 60 days. A Database Specialist needs to ensure that the credentials used by the web application to connect to the database are managed securely. Which approach should the Database Specialist take to securely manage the database credentials?

    • A.Store the credentials in a text file in an Amazon S3 bucket. Restrict permissions on the bucket to the IAM role associated with the instance profile only. Modify the application to download the text file and retrieve the credentials on start up. Update the text file every 60 days.
    • B.Configure IAM database authentication for the application to connect to the database. Create an IAM user and map it to a separate database user for each ecommerce user. Require users to update their passwords every 60 days.
    • C.Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.(correct answer)
    • D.Store the credentials in an encrypted text file in the application AMI. Use AWS KMS to store the key for decrypting the text file. Modify the application to decrypt the text file and retrieve the credentials on start up. Update the text file and publish a new AMI every 60 days.

    Correct answer: C

    Store the credentials in AWS Secrets Manager. Restrict permissions on the secret to only the IAM role associated with the instance profile. Modify the application to retrieve the credentials from Secrets Manager on start up. Configure the rotation interval to 60 days.

    Explanation

    RDS credential management with rotation: AWS Secrets Manager stores the credentials, access is restricted to the IAM role on the instance profile, and automatic rotation is configured for a 60-day interval. The application retrieves the current secret at startup instead of embedding credentials anywhere.
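The startup retrieval can be sketched as follows; the secret name is hypothetical, and the secret body is assumed to be JSON with `username` and `password` keys, as the standard RDS rotation functions produce:

```python
import json

def db_credentials(secrets_client, secret_id):
    """Fetch the rotated DB credentials at application startup.
    secret_id is whatever secret name/ARN was chosen (hypothetical here)."""
    resp = secrets_client.get_secret_value(SecretId=secret_id)
    secret = json.loads(resp["SecretString"])
    return secret["username"], secret["password"]
```

Because Secrets Manager rotates the secret in place, the application only needs to re-fetch it (or retry after an authentication failure) rather than be redeployed every 60 days.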

  8. Question 8. A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379. Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

    • A.Enable in-transit and at-rest encryption on the ElastiCache cluster.(correct answer)
    • B.Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
    • C.Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.(correct answer)
    • D.Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
    • E.Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster's security group.
    • F.Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.(correct answer)

    Correct answer: A, C, F

    Enable in-transit and at-rest encryption on the ElastiCache cluster. / Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only. / Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.

    Explanation

    ElastiCache for Redis security works in three layers: enable both in-transit and at-rest encryption, restrict the cluster's security group to self-referencing traffic plus TCP port 6379 from trusted clients only, and create the cluster with the auth-token parameter so every client must authenticate with that token.

  9. Question 9. A company is running an Amazon RDS for PostgreSQL DB instance and wants to migrate it to an Amazon Aurora PostgreSQL DB cluster. The current database is 1 TB in size. The migration needs to have minimal downtime. What is the FASTEST way to accomplish this?

    • A.Create an Aurora PostgreSQL DB cluster. Set up replication from the source RDS for PostgreSQL DB instance using AWS DMS to the target DB cluster.
    • B.Use the pg_dump and pg_restore utilities to extract and restore the RDS for PostgreSQL DB instance to the Aurora PostgreSQL DB cluster.
    • C.Create a database snapshot of the RDS for PostgreSQL DB instance and use this snapshot to create the Aurora PostgreSQL DB cluster.
    • D.Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.(correct answer)

    Correct answer: D

    Migrate data from the RDS for PostgreSQL DB instance to an Aurora PostgreSQL DB cluster using an Aurora Replica. Promote the replica during the cutover.

    Explanation

    RDS for PostgreSQL to Aurora with minimal downtime: create an Aurora read replica of the RDS for PostgreSQL instance and let replication bring it fully in sync, then promote the replica during the cutover. For a 1 TB database this is faster than DMS, dump-and-restore, or a snapshot migration, and downtime is limited to the promotion.

  10. Question 10. A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region. Where should the AWS DMS replication instance be placed for the MOST optimal performance?

    • A.In the same Region and VPC of the source DB instance.
    • B.In the same Region and VPC as the target DB instance.
    • C.In the same VPC and Availability Zone as the target DB instance.(correct answer)
    • D.In the same VPC and Availability Zone as the source DB instance.

    Correct answer: C

    In the same VPC and Availability Zone as the target DB instance.

    Explanation

    DMS replication instance placement: for this cross-Region migration, place the replication instance in the same VPC and Availability Zone as the target DB instance. Loading the target is the latency-sensitive part of the migration, so proximity to the target gives the most optimal performance.

  11. Question 11. The Development team recently executed a database script containing several data definition language (DDL) and data manipulation language (DML) statements on an Amazon Aurora MySQL DB cluster. The release accidentally deleted thousands of rows from an important table and broke some application functionality. This was discovered 4 hours after the release. Upon investigation, a Database Specialist tracked the issue to a DELETE command in the script with an incorrect WHERE clause filtering the wrong set of rows. The Aurora DB cluster has Backtrack enabled with an 8-hour backtrack window. The Database Administrator also took a manual snapshot of the DB cluster before the release started. The database needs to be returned to the correct state as quickly as possible to resume full application functionality. Data loss must be minimal. How can the Database Specialist accomplish this?

    • A.Quickly rewind the DB cluster to a point in time before the release using Backtrack.
    • B.Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.(correct answer)
    • C.Restore the DB cluster using the manual backup snapshot created before the release and change the application configuration settings to point to the new DB cluster.
    • D.Create a clone of the DB cluster with Backtrack enabled. Rewind the cloned cluster to a point in time before the release. Copy deleted rows from the clone to the original database.

    Correct answer: B

    Perform a point-in-time recovery (PITR) of the DB cluster to a time before the release and copy the deleted rows from the restored database to the original database.

    Explanation

    Aurora Backtrack vs. PITR: Backtrack rewinds the entire cluster in place, which would also discard the 4 hours of valid changes made since the release. Point-in-time recovery to a moment before the release produces a separate restored cluster, and the deleted rows can be copied from it back into the original database, keeping data loss minimal.

  12. Question 12. A company is load testing its three-tier production web application deployed with an AWS CloudFormation template on AWS. The Application team is making changes to deploy additional Amazon EC2 and AWS Lambda resources to expand the load testing capacity. A Database Specialist wants to ensure that the changes made by the Application team will not change the Amazon RDS database resources already deployed. Which combination of steps would allow the Database Specialist to accomplish this? (Choose two.)

    • A.Review the stack drift before modifying the template.
    • B.Create and review a change set before applying it.(correct answer)
    • C.Export the database resources as stack outputs.
    • D.Define the database resources in a nested stack.
    • E.Set a stack policy for the database resources.(correct answer)

    Correct answer: B, E

    Create and review a change set before applying it. / Set a stack policy for the database resources.

    Explanation

    CloudFormation change protection: creating and reviewing a change set shows exactly which resources an update will modify or replace before anything is applied, and a stack policy can explicitly deny Update actions on the database resources so they cannot be changed even if the template is edited.
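A stack policy that denies updates to the database while leaving the rest of the stack updatable might look like this (the logical resource ID is illustrative):

```json
{
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "Update:*",
      "Resource": "LogicalResourceId/ProductionDatabase"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "Update:*",
      "Resource": "*"
    }
  ]
}
```

The explicit Deny wins over the blanket Allow, so any update touching the protected resource fails regardless of what the change set contains.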

  13. Question 13. A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster. Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

    • A.Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.(correct answer)
    • B.Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
    • C.Edit and enable Aurora DB cluster cache management in parameter groups.(correct answer)
    • D.Set TCP keepalive parameters to a high value.
    • E.Set JDBC connection string timeout variables to a low value.(correct answer)
    • F.Set Java DNS caching timeouts to a high value.

    Correct answer: A, C, E

    Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster. / Edit and enable Aurora DB cluster cache management in parameter groups. / Set JDBC connection string timeout variables to a low value.

    Explanation

    Aurora failover with minimal downtime: connect through the provided Aurora cluster and reader endpoints so DNS follows a failover automatically, enable cluster cache management so the designated failover target keeps a warm buffer cache, and set JDBC connection timeouts low so the application detects dead connections and reconnects quickly. High TCP keepalive or DNS caching values would lengthen the outage instead.

  14. Question 14. A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region. Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

    • A.Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
    • B.Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
    • C.Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.(correct answer)
    • D.Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.

    Correct answer: C

    Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.

    Explanation

    Redshift cross-Region encrypted snapshots: KMS keys are Regional, so the destination Region cannot use the source Region's key. Enable cross-Region snapshots in the source Region and create a snapshot copy grant, which lets Amazon Redshift encrypt the copied snapshots with a KMS key in the destination Region.

  15. Question 15. A company has a production Amazon Aurora DB cluster that serves both online transaction processing (OLTP) transactions and compute-intensive reports. The reports run for 10% of the total cluster uptime while the OLTP transactions run all the time. The company has benchmarked its workload and determined that a six-node Aurora DB cluster is appropriate for the peak workload. The company is now looking at cutting costs for this DB cluster, but needs to have a sufficient number of nodes in the cluster to support the workload at different times. The workload has not changed since the previous benchmarking exercise. How can a Database Specialist address these requirements with minimal user involvement?

    • A.Split up the DB cluster into two different clusters: one for OLTP and the other for reporting. Monitor and set up replication between the two clusters to keep data consistent.
    • B.Review and evaluate the peak combined workload. Ensure that utilization of the DB cluster node is at an acceptable level. Adjust the number of instances, if necessary.
    • C.Use the stop cluster functionality to stop all the nodes of the DB cluster during times of minimal workload. The cluster can be restarted again depending on the workload at the time.
    • D.Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.(correct answer)

    Correct answer: D

    Set up automatic scaling on the DB cluster. This will allow the number of reader nodes to adjust automatically to the reporting workload, when needed.

    Explanation

    Aurora cost optimization for a variable workload: Aurora Auto Scaling adjusts the number of Aurora Replicas automatically based on metrics such as CPU utilization or connection counts, so the cluster grows for the reporting window and shrinks afterward with minimal user involvement.

  16. Question 16. A company is running a finance application on an Amazon RDS for MySQL DB instance. The application is governed by multiple financial regulatory agencies. The RDS DB instance is set up with security groups to allow access to certain Amazon EC2 servers only. AWS KMS is used for encryption at rest. Which step will provide additional security?

    • A.Set up NACLs that allow the entire EC2 subnet to access the DB instance.
    • B.Disable the master user account.
    • C.Set up a security group that blocks SSH to the DB instance.
    • D.Set up RDS to use SSL for data in transit.(correct answer)

    Correct answer: D

    Set up RDS to use SSL for data in transit.

    Explanation

    Additional RDS security: encryption at rest (KMS) and network access control (security groups) are already in place, so the remaining gap is data in transit. Requiring SSL for connections protects data traveling between the application servers and the DB instance.

  17. Question 17. A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low. Which solution meets these requirements?

    • A.Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
    • B.Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
    • C.Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.(correct answer)
    • D.Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

    Correct answer: C

    Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.

    Explanation

    Cost-effective Redshift query performance: keep the current-year data on the cluster's dense storage nodes for fast queries, move the older part of the 15-year dataset to Amazon S3 and query it through Redshift Spectrum to keep storage costs low, and enable Concurrency Scaling so the cluster absorbs a fluctuating number of incoming queries without permanent over-provisioning.

  18. Question 18. A gaming company wants to deploy a game in multiple Regions. The company plans to save local high scores in Amazon DynamoDB tables in each Region. A Database Specialist needs to design a solution to automate the deployment of the database with identical configurations in additional Regions, as needed. The solution should also automate configuration changes across all Regions. Which solution would meet these requirements and deploy the DynamoDB tables?

    • A.Create an AWS CLI command to deploy the DynamoDB table to all the Regions and save it for future deployments.
    • B.Create an AWS CloudFormation template and deploy the template to all the Regions.
    • C.Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.(correct answer)
    • D.Create DynamoDB tables using the AWS Management Console in all the Regions and create a step-by-step guide for future deployments.

    Correct answer: C

    Create an AWS CloudFormation template and use a stack set to deploy the template to all the Regions.

    Explanation

    Multi-Region DynamoDB deployment: a CloudFormation template captures the table configuration once, and a stack set deploys that template to any number of Regions in a single operation. Updating the stack set then propagates configuration changes to every Region automatically.

  19. Question 19. A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation. How can the Database Specialists accomplish this?

    • A.Enable the option to push all database logs to Amazon CloudWatch for advanced analysis.
    • B.Create appropriate Amazon CloudWatch dashboards to contain specific periods of time.
    • C.Enable Amazon RDS Performance Insights and review the appropriate dashboard.(correct answer)
    • D.Enable Enhanced Monitoring with the appropriate settings.

    Correct answer: C

    Enable Amazon RDS Performance Insights and review the appropriate dashboard.

    Explanation

    RDS performance investigation: Amazon RDS Performance Insights visualizes database load broken down by wait events, SQL statements, hosts, and users, which is exactly what is needed to narrow the issue to specific wait events. Enhanced Monitoring reports operating-system metrics, not database wait events.

  20. Question 20. A large company is using an Amazon RDS for Oracle Multi-AZ DB instance with a Java application. As a part of its disaster recovery annual testing, the company would like to simulate an Availability Zone failure and record how the application reacts during the DB instance failover activity. The company does not want to make any code changes for this activity. What should the company do to achieve this in the shortest amount of time?

    • A.Use a blue-green deployment with a complete application-level failover test.
    • B.Use the RDS console to reboot the DB instance by choosing the option to reboot with failover.(correct answer)
    • C.Use RDS fault injection queries to simulate the primary node failure.
    • D.Add a rule to the NACL to deny all traffic on the subnets associated with a single Availability Zone.

    Correct answer: B

    Use the RDS console to reboot the DB instance by choosing the option to reboot with failover.

    Explanation

    Simulating an RDS failover: choosing 'Reboot with failover' in the RDS console forces a Multi-AZ failover to the standby, so the team can record how the application reacts to a real failover with no code changes and in the shortest amount of time.

  21. Question 21. A company maintains several databases using Amazon RDS for MySQL and PostgreSQL. Each RDS database generates log files with retention periods set to their default values. The company has now mandated that database logs be maintained for up to 90 days in a centralized repository to facilitate real-time and after-the-fact analyses. What should a Database Specialist do to meet these requirements with minimal effort?

    • A.Create an AWS Lambda function to pull logs from the RDS databases and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
    • B.Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.(correct answer)
    • C.Write a stored procedure in each RDS database to download the logs and consolidate the log files in an Amazon S3 bucket. Set a lifecycle policy to expire the objects after 90 days.
    • D.Create an AWS Lambda function to download the logs from the RDS databases and publish the logs to Amazon CloudWatch Logs. Change the log retention policy for the log group to expire the events after 90 days.

    Correct answer: B

    Modify the RDS databases to publish logs to Amazon CloudWatch Logs. Change the log retention policy for each log group to expire the events after 90 days.

    Explanation

    Centralized RDS logs for 90 days: configure the RDS databases to publish their logs to Amazon CloudWatch Logs, which RDS supports natively for MySQL and PostgreSQL, then set each log group's retention policy to 90 days. This centralizes the logs, supports both real-time and after-the-fact analysis, and requires no custom code.
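Setting the retention can be sketched against the CloudWatch Logs API; RDS log groups follow the documented `/aws/rds/instance/<name>/<log type>` pattern, and the instance and log names below are hypothetical:

```python
# Sketch: apply 90-day retention to each published RDS log group.
# Instance name and log types are hypothetical examples.
def set_rds_log_retention(logs_client, instance_name, log_types, days=90):
    for log_type in log_types:
        logs_client.put_retention_policy(
            logGroupName=f"/aws/rds/instance/{instance_name}/{log_type}",
            retentionInDays=days,
        )
```

90 is one of the retention values CloudWatch Logs accepts, so no separate lifecycle mechanism is needed.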

  22. Question 22. A Database Specialist is setting up a new Amazon Aurora DB cluster with one primary instance and three Aurora Replicas for a highly intensive, business-critical application. The Aurora DB cluster has one medium-sized primary instance, one large-sized replica, and two medium-sized replicas. The Database Specialist did not assign a promotion tier to the replicas. In the event of a primary failure, what will occur?

    • A.Aurora will promote an Aurora Replica that is of the same size as the primary instance.
    • B.Aurora will promote an arbitrary Aurora Replica.
    • C.Aurora will promote the largest-sized Aurora Replica.(correct answer)
    • D.Aurora will not promote an Aurora Replica.

    Correct answer: C

    Aurora will promote the largest-sized Aurora Replica.

    Explanation

    Aurora failover without promotion tiers: when no tiers are assigned, every replica has the same priority, and Aurora breaks the tie by promoting the largest replica. The choice would only be arbitrary if the sizes were also identical.

  23. Question 23. A company is running its line of business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora. Which migration method should a Database Specialist use?

    • A.Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
    • B.Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
    • C.Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.(correct answer)
    • D.Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.
    Show answer & explanation

    Correct answer: C

    Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.

    Explanation

    RDS MySQL to Aurora (Minimal Downtime): create an Aurora Replica of the RDS for MySQL instance, let replication catch up, then promote the Aurora cluster. Ongoing replication keeps downtime to the brief cutover window.
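A minimal sketch of that migration path with the AWS CLI; the ARN, account ID, and identifiers are placeholders:

```shell
# Create an Aurora MySQL cluster that replicates from the RDS for MySQL instance.
aws rds create-db-cluster \
  --db-cluster-identifier aurora-migration \
  --engine aurora-mysql \
  --replication-source-identifier arn:aws:rds:us-east-1:111122223333:db:source-mysql

# Once replica lag reaches zero, promote the cluster to stand alone,
# then point the application at the new Aurora endpoint.
aws rds promote-read-replica-db-cluster \
  --db-cluster-identifier aurora-migration
```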

  24. Question 24.The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution. Which approach will meet these requirements?

    • A.Use pg_audit to generate audit logs and send the logs to the Security team.
    • B.Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.
    • C.Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.(correct answer)
    • D.Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.
    Show answer & explanation

    Correct answer: C

    Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.

    Explanation

    Aurora PostgreSQL Real-Time Audit Logs: Database Activity Streams push encrypted audit events to Amazon Kinesis in near real time, so the Security team can monitor and alert entirely outside the DB cluster.
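Starting an activity stream is a single CLI call; this sketch uses placeholder cluster ARN and KMS key alias:

```shell
# Start an asynchronous database activity stream on the Aurora cluster.
# Events are encrypted with the given KMS key and delivered to Kinesis.
aws rds start-activity-stream \
  --resource-arn arn:aws:rds:us-east-1:111122223333:cluster:prod-aurora-pg \
  --mode async \
  --kms-key-id alias/activity-stream-key \
  --apply-immediately
```

Consumer applications then read the encrypted records from the Kinesis data stream that RDS creates.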

  25. Question 25.A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete. Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake. Which approach should the Database Specialist take to reduce downtime?

    • A.Deploy multiple read replicas and have the team members make changes to separate replica instances.
    • B.Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot.
    • C.Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature.(correct answer)
    • D.Enable the Amazon RDS for MySQL Backtrack feature.
    Show answer & explanation

    Correct answer: C

    Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature.

    Explanation

    Reduce Restore Downtime (Schema Mistakes): migrate to Aurora MySQL and enable Backtrack, which rewinds the cluster to a point in time in minutes, without the hours-long full restore. Backtrack is an Aurora feature and is not available on RDS for MySQL.
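A sketch of enabling and using Backtrack, with placeholder cluster identifier and timestamp:

```shell
# Enable a 24-hour (86400-second) backtrack window on the Aurora MySQL cluster.
aws rds modify-db-cluster \
  --db-cluster-identifier dev-aurora-mysql \
  --backtrack-window 86400

# Rewind the cluster in place to just before a bad schema change.
aws rds backtrack-db-cluster \
  --db-cluster-identifier dev-aurora-mysql \
  --backtrack-to 2026-01-15T10:00:00Z
```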

  26. Question 26.A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements. Only certain on-premises corporate network IPs should connect to the DB instance. Connectivity is allowed from the corporate network only. Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

    • A.Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
    • B.Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.(correct answer)
    • C.Move the DB instance to a private subnet using AWS DMS.
    • D.Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
    • E.Disable the publicly accessible setting.(correct answer)
    • F.Connect to the DB instance using private IPs and a VPN.(correct answer)
    Show answer & explanation

    Correct answer: B, E, F

    Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs. / Disable the publicly accessible setting. / Connect to the DB instance using private IPs and a VPN.

    Explanation

    RDS Public to Private Migration: modify the security group to allow only the corporate network IPs, disable the publicly accessible setting, and connect over private IPs through a VPN.
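The security group and instance changes can be sketched with the CLI; the security group ID, corporate CIDR, and instance identifier are placeholders:

```shell
# Allow PostgreSQL traffic only from the corporate network CIDR.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0abc1234 --protocol tcp --port 5432 --cidr 203.0.113.0/24
# Remove the open-to-the-world rule.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0abc1234 --protocol tcp --port 5432 --cidr 0.0.0.0/0

# Turn off the publicly accessible flag so the instance loses its public IP.
aws rds modify-db-instance \
  --db-instance-identifier userdata-pg \
  --no-publicly-accessible \
  --apply-immediately
```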

  27. Question 27.A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort. What should the Database Specialist do to meet these requirements?

    • A.Restore a snapshot from the production cluster into test clusters.
    • B.Create logical dumps of the production cluster and restore them into new test clusters.
    • C.Use database cloning to create clones of the production cluster.(correct answer)
    • D.Add an additional read replica to the production cluster and use that node for testing.
    Show answer & explanation

    Correct answer: C

    Use database cloning to create clones of the production cluster.

    Explanation

    Fast Test Database Creation: Aurora database cloning uses a copy-on-write protocol, so a clone is created quickly and shares unchanged storage with the source cluster. This is faster and more storage-efficient than restoring a snapshot.
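Cloning is exposed through the point-in-time restore API with a copy-on-write restore type; this sketch uses placeholder cluster identifiers:

```shell
# Clone the production cluster; copy-on-write means the clone shares
# unchanged storage pages with the source until either side writes.
aws rds restore-db-cluster-to-point-in-time \
  --source-db-cluster-identifier prod-aurora-mysql \
  --db-cluster-identifier test-clone-1 \
  --restore-type copy-on-write \
  --use-latest-restorable-time
```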

  28. Question 28.A company with branch offices in Portland, New York, and Singapore has a three-tier web application that leverages a shared database. The database runs on Amazon RDS for MySQL and is hosted in the us-west-2 Region. The application has a distributed front end deployed in the us-west-2, ap-southeast-1, and us-east-2 Regions. This front end is used as a dashboard for Sales Managers in each branch office to see current sales statistics. There are complaints that the dashboard performs more slowly in the Singapore location than it does in Portland or New York. A solution is needed to provide consistent performance for all users in each location. Which set of actions will meet these requirements?

    • A.Take a snapshot of the instance in the us-west-2 Region. Create a new instance from the snapshot in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
    • B.Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.(correct answer)
    • C.Create a new RDS instance in the ap-southeast-1 Region. Use AWS DMS and change data capture (CDC) to update the new instance in the ap-southeast-1 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
    • D.Create an RDS read replica in the us-west-2 Region where the primary instance resides. Create a read replica in the ap-southeast-1 Region from the read replica located on the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.
    Show answer & explanation

    Correct answer: B

    Create an RDS read replica in the ap-southeast-1 Region from the primary RDS DB instance in the us-west-2 Region. Reconfigure the ap-southeast-1 front-end dashboard to access this instance.

    Explanation

    RDS Cross-Region Read Performance: create an RDS read replica in ap-southeast-1 so the Singapore dashboard reads from a local replica, cutting the cross-Pacific round-trip latency.
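A cross-Region replica is created by targeting the destination Region and passing the source instance's ARN; the ARN, account ID, and identifiers below are placeholders:

```shell
# Create a read replica in Singapore from the us-west-2 primary.
# Note the --region flag targets the destination Region.
aws rds create-db-instance-read-replica \
  --region ap-southeast-1 \
  --db-instance-identifier sales-replica-sin \
  --source-db-instance-identifier arn:aws:rds:us-west-2:111122223333:db:sales-primary
```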

  29. Question 29.A company wants to migrate its existing on-premises Oracle database to Amazon Aurora PostgreSQL. The migration must be completed with minimal downtime using AWS DMS. A Database Specialist must validate that the data was migrated accurately from the source to the target before the cutover. The migration must have minimal impact on the performance of the source database. Which approach will MOST effectively meet these requirements?

    • A.Use the AWS Schema Conversion Tool (AWS SCT) to convert source Oracle database schemas to the target Aurora DB cluster. Verify the datatype of the columns.
    • B.Use the table metrics of the AWS DMS task created for migrating the data to verify the statistics for the tables being migrated and to verify that the data definition language (DDL) statements are completed.
    • C.Enable the AWS Schema Conversion Tool (AWS SCT) premigration validation and review the premigration checklist to make sure there are no issues with the conversion.
    • D.Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.(correct answer)
    Show answer & explanation

    Correct answer: D

    Enable AWS DMS data validation on the task so the AWS DMS task compares the source and target records, and reports any mismatches.

    Explanation

    DMS Migration Validation: enable AWS DMS data validation on the task. DMS compares source and target records row by row and reports any mismatches, with minimal extra load on the source database.
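Validation is switched on through the task settings JSON passed via `--replication-task-settings`; this is a minimal fragment, and the thread count is an illustrative value:

```shell
# Task settings fragment that enables row-level data validation.
# Supply it with --replication-task-settings when creating the DMS task.
cat > task-settings.json <<'EOF'
{
  "ValidationSettings": {
    "EnableValidation": true,
    "ThreadCount": 5
  }
}
EOF
```

Validation results then appear in the task's table statistics and in the `awsdms_validation_failures_v1` table on the target.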

  30. Question 30.A company is planning to close for several days. A Database Specialist needs to stop all applications along with the DB instances to ensure employees do not have access to the systems during this time. All databases are running on Amazon RDS for MySQL. The Database Specialist wrote and executed a script to stop all the DB instances. When reviewing the logs, the Database Specialist found that Amazon RDS DB instances with read replicas did not stop. How should the Database Specialist edit the script to fix this issue?

    • A.Stop the source instances before stopping their read replicas.
    • B.Delete each read replica before stopping its corresponding source instance.(correct answer)
    • C.Stop the read replicas before stopping their source instances.
    • D.Use the AWS CLI to stop each read replica and source instance at the same time.
    Show answer & explanation

    Correct answer: B

    Delete each read replica before stopping its corresponding source instance.

    Explanation

    Stop RDS with Read Replicas: RDS does not allow stopping a DB instance that has active read replicas, so the script must delete each read replica before stopping its source instance.
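The fix to the script can be sketched as below; "source-db" is a placeholder identifier, and `--skip-final-snapshot` assumes the replicas hold no unique data:

```shell
# Delete each read replica of the source, wait for deletion, then stop the source.
SOURCE=source-db
for replica in $(aws rds describe-db-instances \
    --query "DBInstances[?ReadReplicaSourceDBInstanceIdentifier=='$SOURCE'].DBInstanceIdentifier" \
    --output text); do
  aws rds delete-db-instance --db-instance-identifier "$replica" --skip-final-snapshot
  aws rds wait db-instance-deleted --db-instance-identifier "$replica"
done
aws rds stop-db-instance --db-instance-identifier "$SOURCE"
```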

Ready for the full DBS-C01 exam?

Get all 160+ Questions, timed simulation, and weak-area analytics. Plans from $2.99 — credits never expire.

See pricing

Frequently Asked Questions

Are these real DBS-C01 practice questions?+
Yes. These 30 questions are taken directly from our 160+ Questions pool, written and reviewed by certified practitioners. They mirror the style, difficulty, and scope of the official AWS DBS-C01 exam.
Is the DBS-C01 exam hard?+
The AWS Certified Database Specialty (DBS-C01) exam has a passing score of 750 out of 1000. Most candidates need 4–8 weeks of focused preparation. Use these free questions to gauge where you stand before committing to a full study plan.
How many questions are on the real DBS-C01 exam?+
The official DBS-C01 exam has 65 questions (50 scored, 15 unscored).
Do I need to sign up to use these questions?+
No. These 30 questions are free and require no signup. If you want timed simulation, performance analytics, and access to all 160+ Questions, our paid plans start at $2.99 per exam with credits that never expire.

Keep studying

Pass DBS-C01 on your first try

Join candidates using DummyExams to practice with realistic timed exams, detailed explanations, and weak-area analytics.

Start full DBS-C01 practice exam