
You’ve been tasked with implementing an automated data backup solution for your application servers that run on
Amazon EC2 with Amazon EBS volumes. You want to use a distributed data store for your backups to avoid single points
of failure and to increase the durability of the data. Daily backups should be retained for 30 days so that you can restore
data within an hour. How can you implement this through a script that a scheduling daemon runs daily on the application
servers?

A.
Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and copy backup data to a second Amazon EBS volume.
Use the ec2-describe-volumes API to enumerate existing backup volumes.
Call the ec2-delete-volume API to prune backup volumes that are tagged with a date-time group older than 30 days.

B.
Write the script to call the Amazon Glacier upload-archive API, and tag the backup archive with the current date-time group.
Use the list-vaults API to enumerate existing backup archives.
Call the delete-vault API to prune backup archives that are tagged with a date-time group older than 30 days.

C.
Write the script to call the ec2-create-snapshot API, and tag the Amazon EBS snapshot with the current date-time group.
Use the ec2-describe-snapshots API to enumerate existing Amazon EBS snapshots.
Call the ec2-delete-snapshot API to prune Amazon EBS snapshots that are tagged with a date-time group older than 30 days.

D.
Write the script to call the ec2-create-volume API, tag the Amazon EBS volume with the current date-time group, and use the ec2-copy-snapshot API to back up data to the new Amazon EBS volume.
Use the ec2-describe-snapshots API to enumerate existing backup volumes.
Call the ec2-delete-snapshot API to prune backup Amazon EBS volumes that are tagged with a date-time group older than 30 days.
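Option C describes the snapshot-based create/enumerate/prune cycle. As a minimal sketch of just the retention logic (the `snapshots_to_prune` helper and the ISO-8601 tag format are assumptions for illustration; the actual EC2 API calls are left as comments):

```python
from datetime import datetime, timedelta

RETENTION_DAYS = 30  # per the stated 30-day retention requirement

def snapshots_to_prune(tagged_snapshots, now, retention_days=RETENTION_DAYS):
    """Given (snapshot_id, date_time_group) pairs, return the IDs whose
    date-time-group tag is older than the retention window. The tag is
    assumed to be an ISO-8601 timestamp, e.g. '2024-05-01T02:00:00'."""
    cutoff = now - timedelta(days=retention_days)
    return [sid for sid, dtg in tagged_snapshots
            if datetime.fromisoformat(dtg) < cutoff]

# The daily script in option C would then be, roughly:
#   1. call ec2-create-snapshot and tag the snapshot with now.isoformat()
#   2. call ec2-describe-snapshots to enumerate (id, date-time-group) pairs
#   3. call ec2-delete-snapshot for each ID returned by snapshots_to_prune()
```

Because each snapshot is self-describing via its tag, the pruning step needs no external state: the script can be run idempotently by any scheduling daemon (e.g. cron) on each application server.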
