Top 40 AWS Interview Questions – Read Now!


If you’ve started cross-training to prepare for development and operations roles in the AWS (Amazon Web Services) ecosystem, you know it’s a challenging field that takes real preparation to break into. Here are the top 40 AWS interview questions and answers to help you prepare for an AWS Developer role.

Top 40 AWS Interview Questions

Question 1: What is AWS?

AWS stands for Amazon Web Services; it is a collection of remote computing services, also known as a cloud computing platform. This realm of cloud computing is also known as IaaS, or Infrastructure as a Service.

Question 2: What are the key components of AWS?

The major components of AWS are listed here:

  • Route 53: A DNS web service
  • Simple E-mail Service: Allows sending e-mail using a RESTful API call or via regular SMTP
  • Identity and Access Management (IAM): Provides enhanced security and identity management for your AWS account
  • Simple Storage Service (S3): A storage service and one of the most widely used AWS services
  • Elastic Compute Cloud (EC2): Provides on-demand computing resources for hosting applications; very useful in case of unpredictable workloads
  • Elastic Block Store (EBS): Provides persistent storage volumes that attach to EC2, allowing you to persist data past the lifespan of a single EC2 instance
  • CloudWatch: Monitors AWS resources; it allows administrators to view and collect key metrics, and one can set a notification alarm in case of trouble.
Question 3: What is S3?

S3 stands for Simple Storage Service. You can use the S3 interface to store and retrieve any amount of data, at any time and from anywhere on the web. For S3, the payment model is “pay as you go”.
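As a concrete illustration, here is a minimal sketch of that store-and-retrieve cycle using boto3, the AWS SDK for Python. The bucket and key names are made-up placeholders, and actually running the calls requires AWS credentials:

```python
def upload_and_fetch(bucket: str, key: str, payload: bytes) -> bytes:
    """Store an object in S3 and read it back (requires AWS credentials)."""
    import boto3  # imported lazily so the sketch can be read without boto3 installed
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=payload)
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

# An object is addressed purely by bucket + key; there is no capacity to
# provision up front, which is what "pay as you go" means in practice.
EXAMPLE_KEY = "notes/hello.txt"  # hypothetical key for illustration
```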

Question 4: What Do You Mean By AMI?

AMI stands for Amazon Machine Image.  It’s a template that provides the information (an operating system, an application server, and applications) required to launch an instance, which is a copy of the AMI running as a virtual server in the cloud.  You can launch instances from as many different AMIs as you need.

Question 5: What does an AMI include?

An AMI includes the following:

  • A template for the root volume for the instance
  • Launch permissions that decide which AWS accounts can use the AMI to launch instances
  • A block device mapping that determines the volumes to attach to the instance when it is launched
Question 6: What is Amazon EC2?

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction.
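Obtaining capacity programmatically comes down to launching an instance from an AMI. A hedged boto3 sketch; the AMI ID and instance type are placeholders, and the call needs AWS credentials:

```python
def launch_instance(ami_id: str, instance_type: str = "t3.micro") -> str:
    """Launch one EC2 instance from the given AMI and return its instance ID."""
    import boto3  # requires AWS credentials at call time
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,  # launch exactly one instance
        MaxCount=1,
    )
    return resp["Instances"][0]["InstanceId"]

PLACEHOLDER_AMI = "ami-0123456789abcdef0"  # not a real image ID
```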

Question 7: Mention what is the difference between Amazon S3 and EC2?

The difference between EC2 and Amazon S3 is that:

EC2
  • It is a cloud web service used for hosting your application
  • It is like a huge computer machine which can run either Linux or Windows and can handle applications like PHP, Python, Apache, or any database

S3
  • It is a data storage system where any amount of data can be stored
  • It has a REST interface and uses secure HMAC-SHA1 authentication keys

Question 8: What does the following command do with respect to the Amazon EC2 security groups?

ec2-create-group CreateSecurityGroup

  1. Groups the user created security groups into a new group for easy access.
  2. Creates a new security group for use with your account.
  3. Creates a new group inside the security group.
  4. Creates a new rule inside the security group.

The answer is B.

Explanation: A security group acts like a firewall; it controls the traffic in and out of your instance — in AWS terms, the inbound and outbound traffic. The command above is straightforward: it creates a new security group for use with your account. Once your security group is created, you can add different rules to it. For example, if you have an RDS instance, then to access it you have to add the public IP address of the machine from which you want to access the instance to its security group.
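The `ec2-create-group` tool belongs to the legacy EC2 CLI; the same operation is usually done through an SDK today. A sketch with boto3 covering both the group creation and the RDS-style ingress rule from the explanation (the group name and IP address are hypothetical, and the calls need AWS credentials):

```python
def create_group_with_rule(name: str, my_ip: str) -> str:
    """Create a security group and open the MySQL port 3306 to one address."""
    import boto3  # requires AWS credentials at call time
    ec2 = boto3.client("ec2")
    group = ec2.create_security_group(
        GroupName=name, Description="Example group for RDS access"
    )
    ec2.authorize_security_group_ingress(
        GroupId=group["GroupId"],
        IpProtocol="tcp",
        FromPort=3306,         # default MySQL/Aurora port
        ToPort=3306,
        CidrIp=f"{my_ip}/32",  # /32 = only this single host may connect
    )
    return group["GroupId"]

# The inbound rule the sketch adds, expressed as plain data:
RDS_RULE = {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306}
```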

Question 9: How is stopping and terminating an instance different from each other when it comes to EC2?

Stopping and terminating may look similar, but they behave quite differently; let’s discuss them in detail:

  • Stopping and Starting an instance: When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state. All of its Amazon EBS volumes remain attached, and you can start the instance again at a later time. You are not charged for additional instance hours while the instance is in a stopped state.
  • Terminating an instance: When an instance is terminated, the instance performs a normal shutdown, and then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false. The instance itself is also deleted, and you can’t start the instance again at a later time.
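The two operations map to two different API calls. A hedged boto3 sketch (the instance ID is whatever instance you own, and the calls require AWS credentials), plus a tiny pure helper recording which end state each action leads to:

```python
def end_state(action: str) -> str:
    """Map an EC2 lifecycle action to the state the instance ends in."""
    return {"stop": "stopped", "terminate": "terminated"}[action]

def stop_or_terminate(instance_id: str, keep: bool) -> str:
    """Stop (recoverable, EBS volumes stay attached) or terminate (final)."""
    import boto3  # requires AWS credentials at call time
    ec2 = boto3.client("ec2")
    if keep:
        ec2.stop_instances(InstanceIds=[instance_id])
        return end_state("stop")
    # deleteOnTermination volumes are removed along with the instance
    ec2.terminate_instances(InstanceIds=[instance_id])
    return end_state("terminate")
```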
Question 10: Explain can you vertically scale an Amazon instance? If “Yes”, then explain “How”?

Yes, you can vertically scale an Amazon instance. Follow the steps below:

  • Spin up a new, larger instance than the one you are currently running
  • Pause the new instance and detach the root EBS volume from the server and discard it
  • Then stop your live instance and detach its root volume
  • Note the unique device ID and attach that root volume to your new server
  • And start it again
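The volume-swap steps above can be sketched with boto3 like so. The device name `/dev/xvda` is typical for Linux AMIs but is an assumption here, and the calls require AWS credentials:

```python
def move_root_volume(old_instance: str, new_instance: str, volume_id: str) -> None:
    """Re-attach a root EBS volume from a stopped instance to a larger one."""
    import boto3  # requires AWS credentials at call time
    ec2 = boto3.client("ec2")
    ec2.stop_instances(InstanceIds=[old_instance])   # stop the live instance
    ec2.detach_volume(VolumeId=volume_id, InstanceId=old_instance)
    ec2.attach_volume(
        VolumeId=volume_id,
        InstanceId=new_instance,
        Device="/dev/xvda",  # assumed root device name for a Linux AMI
    )
    ec2.start_instances(InstanceIds=[new_instance])  # bring up the larger server
```

In practice you would wait for each state transition to complete (e.g. with boto3 waiters) between steps.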
Question 11: To deploy a 4 node cluster of Hadoop in AWS which instance type can be used?

First, let’s understand what actually happens in a Hadoop cluster. A Hadoop cluster follows a master-slave concept: the master machine coordinates the processing of all the data, while the slave machines store the data and act as data nodes. Since all the storage happens on the slaves, a higher-capacity hard disk is recommended for them, and since the master handles the coordination and bookkeeping, it needs more RAM and a much better CPU. You can therefore select the configuration of each machine depending on your workload. For example, in this case a c4.8xlarge instance would be preferred for the master machine, whereas for the slave machines we can select i2.large instances. If you don’t want to deal with configuring your instances and installing a Hadoop cluster manually, you can straight away launch an Amazon EMR (Elastic MapReduce) cluster, which automatically configures the servers for you. You dump the data to be processed in S3; EMR picks it up from there, processes it, and dumps it back into S3.

Question 12: Where do you think an AMI fits, when you are designing architecture for a solution?

AMIs (Amazon Machine Images) are like templates of virtual machines and an instance is derived from an AMI. AWS offers pre-baked AMIs which you can choose while you are launching an instance, some AMIs are not free, therefore can be bought from the AWS Marketplace. You can also choose to create your own custom AMI which would help you save space on AWS. For example, if you don’t need a set of software on your installation, you can customize your AMI to do that. This makes it cost-efficient since you are removing the unwanted things.

Question 13: What are the best practices for Security in Amazon EC2?

There are several best practices to secure Amazon EC2. A few of them are given below:

  • Use AWS Identity and Access Management (IAM) to control access to your AWS resources.
  • Restrict access by only allowing trusted hosts or networks to access ports on your instance.
  • Review the rules in your security groups regularly, and ensure that you apply the principle of least privilege – only open up permissions that you require.
  • Disable password-based logins for instances launched from your AMI. Passwords can be found or cracked, and are a security risk.
Question 14: When you need to move data over long distances using the internet, for instance across countries or continents to your Amazon S3 bucket, which method or service will you use?
  1. Amazon Glacier
  2. Amazon CloudFront
  3. Amazon S3 Transfer Acceleration
  4. Amazon Snowball

Answer C.

Explanation: You would not use Snowball because, for now, the Snowball service does not support cross-region data transfer, and since we are transferring across countries, Snowball cannot be used. Transfer Acceleration is the right choice here, as it speeds up your data transfer using optimized network paths and Amazon’s content delivery network, improving transfer speeds by up to 300% compared to normal data transfer.

Question 15: To create a mirror image of your environment in another region for disaster recovery, which of the following AWS resources do not need to be recreated in the second region? (Choose 2 answers)
  • Route 53 Record Sets
  • Elastic IP Addresses (EIP)
  • EC2 Key Pairs
  • Launch configurations
  • Security Groups

Answer A, B

Explanation: Elastic IPs and Route 53 record sets are common assets; therefore, there is no need to replicate them, since Elastic IPs and Route 53 are valid across regions.

Question 16: How is AWS OpsWorks different than AWS CloudFormation?

OpsWorks and CloudFormation both support application modeling, deployment, configuration, management and related activities. Both support a wide variety of architectural patterns, from simple web applications to highly complex applications. AWS OpsWorks and AWS CloudFormation differ in abstraction level and areas of focus.

AWS CloudFormation is a building block service which enables the customer to manage almost any AWS resource via JSON-based domain specific language. It provides foundational capabilities for the full breadth of AWS, without prescribing a particular model for development and operations. Customers define templates and use them to provision and manage AWS resources, operating systems and application code.

In contrast, AWS OpsWorks is a higher level service that focuses on providing highly productive and reliable DevOps experiences for IT administrators and ops-minded developers. To do this, AWS OpsWorks employs a configuration management model based on concepts such as stacks and layers and provides integrated experiences for key activities like deployment, monitoring, auto-scaling, and automation. Compared to AWS CloudFormation, AWS OpsWorks supports a narrower range of application-oriented AWS resource types including Amazon EC2 instances, Amazon EBS volumes, Elastic IPs, and Amazon CloudWatch metrics.

Question 17: What is the difference between Amazon SQS and Amazon SNS?

Amazon Simple Notification Service (Amazon SNS) is a fast, flexible, fully managed push notification service that lets you send individual or bulk messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users and email recipients, or even to send messages to other distributed services.

SNS is a distributed publish-subscribe system. Messages are pushed to subscribers as and when they are sent by publishers to SNS. SNS supports several endpoints such as email, SMS, HTTP endpoints, and SQS. If you want an unknown number and type of subscribers to receive messages, you need SNS.

With Amazon SNS, you can send push notifications to Apple, Google, Fire OS, and Windows devices, as well as to Android devices in China with Baidu Cloud Push. You can use SNS to send SMS messages to mobile device users in the US or to email recipients worldwide.

Amazon SQS (Simple Queue Service)

SQS is a distributed queuing system. Messages are not pushed to receivers; receivers have to poll SQS to receive messages. Messages can’t be received by multiple receivers at the same time: any one receiver can receive a message, process it, and delete it, and other receivers do not receive the same message later. Polling inherently introduces some latency in message delivery in SQS, unlike SNS, where messages are immediately pushed to subscribers.

SQS is mainly used to decouple or integrate applications. Messages can be stored in SQS for a short duration of time (a maximum of 14 days). SNS, by contrast, distributes several copies of a message to several subscribers. For example, let’s say you want to replicate data generated by an application to several storage systems. You could use SNS and send this data to multiple subscribers, each replicating the messages it receives to a different storage system (S3, the hard disk on your host, a database, etc.).
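The push-versus-poll distinction can be made concrete with a toy in-memory model. This illustrates the semantics only; it is not the AWS API:

```python
from collections import deque

class Topic:
    """SNS-like: pushes a copy of every message to all subscribers."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, queue):
        self.subscribers.append(queue)

    def publish(self, message):
        for q in self.subscribers:  # fan-out: each subscriber gets its own copy
            q.send(message)

class Queue:
    """SQS-like: consumers must poll, and each message goes to one consumer."""
    def __init__(self):
        self._messages = deque()

    def send(self, message):
        self._messages.append(message)

    def poll(self):
        return self._messages.popleft() if self._messages else None

# Replication example from above: one event fanned out to two storage writers.
topic = Topic()
s3_writer, db_writer = Queue(), Queue()
topic.subscribe(s3_writer)
topic.subscribe(db_writer)
topic.publish("row-42")
assert s3_writer.poll() == "row-42" and db_writer.poll() == "row-42"
assert s3_writer.poll() is None  # a message is consumed only once per queue
```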


Question 18: What are the different APIs available in Amazon DynamoDB?

Amazon DynamoDB supports both document and key-value data models. Because of this, the APIs in DynamoDB are generic enough to serve both types.
Some of the main APIs available in DynamoDB are as follows:

  • CreateTable
  • UpdateTable
  • DeleteTable
  • DescribeTable
  • ListTables
  • PutItem
  • GetItem
  • BatchWriteItem
  • BatchGetItem
  • UpdateItem
  • DeleteItem
  • Query
  • Scan
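As a hedged sketch of two of the APIs above via boto3: DynamoDB’s low-level API types every attribute value explicitly ("S" for string, "N" for number). The table and attribute names here are placeholders, and the calls need AWS credentials:

```python
def put_and_get(table: str, key: dict, attrs: dict) -> dict:
    """PutItem followed by GetItem against a DynamoDB table."""
    import boto3  # requires AWS credentials at call time
    ddb = boto3.client("dynamodb")
    ddb.put_item(TableName=table, Item={**key, **attrs})
    return ddb.get_item(TableName=table, Key=key)["Item"]

# Low-level attribute-value format used by PutItem/GetItem:
EXAMPLE_ITEM = {"pk": {"S": "user#1"}, "age": {"N": "30"}}
```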
Question 19: What are the benefits of AWS Storage Gateway?

AWS Storage Gateway is a hybrid storage service that enables your on-premises applications to seamlessly use AWS cloud storage. You can use the service for backup and archiving, disaster recovery, cloud bursting, storage tiering, and migration. Your applications connect to the service through a gateway appliance using standard storage protocols, such as NFS and iSCSI. The gateway connects to AWS storage services, such as Amazon S3, Amazon Glacier, and Amazon EBS, providing storage for files, volumes, and virtual tapes in AWS. The service includes a highly optimized data transfer mechanism, with bandwidth management, automated network resilience, and efficient data transfer, along with a local cache for low-latency on-premises access to your most active data.

Question 20: What is an activity in AWS Data Pipeline?

An activity in AWS Data Pipeline is an action that is initiated as part of the pipeline. Some examples of activities are: Elastic MapReduce (EMR) jobs, Hive jobs, data copies, SQL queries, and command-line scripts.

Question 21: What is a schedule in AWS Data Pipeline?

In AWS Data Pipeline we can define a Schedule. The Schedule contains the information about when pipeline activities will run and with what frequency.

All schedules have a start date and a frequency.

E.g. one schedule can run every day starting March 1, 2016, at 6 AM.

Schedules may also have an end date, after which the AWS Data Pipeline service will not execute any activity.

Question 22: What is Lambda@Edge in AWS?

Lambda@Edge is an extension of AWS Lambda, a compute service that lets you execute functions to customize the content that is delivered through CloudFront. You can author functions in one region and execute them in AWS Regions and edge locations globally without provisioning or managing servers.

You can use Lambda functions to change CloudFront requests and responses at the following points:

  • After CloudFront receives a request from a viewer (viewer request)
  • Before CloudFront forwards the request to the origin (origin request)
  • After CloudFront receives the response from the origin (origin response)
  • Before CloudFront forwards the response to the viewer (viewer response)
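A minimal viewer-request function might look like the following. Python is among the runtimes Lambda@Edge supports; the URI rewrite and the trimmed event shape here are illustrative:

```python
def handler(event, context):
    """Viewer-request sketch: rewrite /old/* URIs to /new/* before
    CloudFront checks its cache. Returning the request lets it continue."""
    request = event["Records"][0]["cf"]["request"]
    if request["uri"].startswith("/old/"):
        request["uri"] = "/new/" + request["uri"][len("/old/"):]
    return request

# A simulated CloudFront event for local testing (trimmed to the field used):
EVENT = {"Records": [{"cf": {"request": {"uri": "/old/index.html"}}}]}
```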
Question 23: How do I use the AWS Storage Gateway service?

You can have two touch points to use the service: the AWS Management Console and a gateway virtual machine (VM).

You use the AWS Management Console to download the gateway, configure storage, and manage and monitor the service. The gateway connects your applications to AWS storage by providing standard storage interfaces. It provides transparent caching, efficient data transfer, and integration with AWS monitoring and security services.

To get started, sign up for AWS Storage Gateway by choosing “Sign Up Now” on the AWS Storage Gateway detail page. To sign up, you must have an Amazon Web Services account; if you don’t already have one, you are prompted to create one when you begin the AWS Storage Gateway sign-up process.

After you sign up, you visit the AWS Storage Gateway Management Console to download a gateway with a file, volume, or tape interface. Once you’ve downloaded and installed your gateway, you associate it with your AWS Account through our activation process. After activation, you configure the gateway to connect to the appropriate storage type. For file gateway, you configure file shares that are mapped to selected S3 buckets, using IAM roles. For volume gateway, you create and mount volumes as iSCSI devices. For tape gateway, you connect your backup application to create and manage tapes. Once configured, you start using the gateway to write and read data to and from AWS storage. You can monitor the status of your data transfer and your storage interfaces through the AWS Management Console. Additionally, you can use the API or SDK to programmatically manage your application’s interaction with the gateway.

Question 24: What Is File Gateway?

File gateway provides a virtual file server, which enables you to store and retrieve Amazon S3 objects through standard file storage protocols. File gateway allows your existing file-based applications or devices to use secure and durable cloud storage without needing to be modified. With file gateway, your configured S3 buckets will be available as Network File System (NFS) mount points. Your applications read and write files and directories over NFS, interfacing to the gateway as a file server. In turn, the gateway translates these file operations into object requests on your S3 buckets. Your most recently used data is cached on the gateway for low-latency access, and data transfer between your data center and AWS is fully managed and optimized by the gateway. Once in S3, you can access the objects directly or manage them using features such as S3 Lifecycle Policies, object versioning, and cross-region replication. You can run file gateway On-Premises or in EC2.

Question 25: What Is Volume Gateway?

Volume gateway provides an iSCSI target, which enables you to create volumes and mount them as iSCSI devices from your on-premises or EC2 application servers. The volume gateway runs in either a cached or stored mode.

  • In the cached mode, your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency access.
  • In the stored mode, your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up to AWS.

In either mode, you can take point-in-time snapshots of your volumes and store them in Amazon S3, enabling you to make space-efficient versioned copies of your volumes for data protection and various data reuse needs.

Question 26: What Is Tape Gateway?

Tape gateway presents your backup application with a virtual tape library (VTL) interface, consisting of a media changer and tape drives. You can create virtual tapes in your virtual tape library using the AWS Management Console. Your backup application can read data from or write data to virtual tapes by mounting them to virtual tape drives using the virtual media changer. Virtual tapes are discovered by your backup application using its standard media inventory procedure. Virtual tapes are available for immediate access and are backed by Amazon S3. You can also archive tapes. Archived tapes are stored in Amazon Glacier.

Question 27: What are the benefits of using file gateway to store data in S3?

File gateway enables your existing file-based applications, devices, and workflows to use cloud storage without modification. File gateway securely and durably stores both file contents and metadata as objects in your Amazon S3 buckets using standard file protocols.

Question 28: What is the relationship between files and objects?

Files are stored as objects in your S3 buckets. There is a one-to-one relationship between files and objects, and you can configure the initial storage class for the objects that file gateway creates.

The object key is derived from the file’s path within the file system. For example, if you have a gateway with hostname file.amazon.com and have mapped my-bucket, then file gateway will expose a mount point called file.amazon.com:/export/my-bucket. If you then mount this locally on /mnt/my-bucket and create a file named file.html in a directory /mnt/my-bucket/dir, this file will be stored as an object in the bucket my-bucket with a key of dir/file.html.
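That path-to-key mapping is simple enough to express as a small helper. This is an illustration of the rule described above, not an AWS API:

```python
from pathlib import PurePosixPath

def object_key(mount_point: str, file_path: str) -> str:
    """Derive the S3 object key from a file's path under the NFS mount point."""
    return str(PurePosixPath(file_path).relative_to(mount_point))

# The example from the answer: a file under /mnt/my-bucket/dir
# becomes the object key dir/file.html in the bucket my-bucket.
```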

Question 29: Can I Have Two Gateways Writing Independent Data To The Same Bucket?

We do not recommend configuring multiple writers to a single bucket because it can lead to unpredictable results. You could enforce unique object names or prefixes through your application workflow. File gateway doesn’t monitor or report on conflicts in such a setup.

Question 30: Can I Have Multiple Gateways Reading Data From The Same Bucket?

Yes, you can have multiple readers on a bucket managed through a file gateway. You can configure a file share as read-only, and allow multiple gateways to read objects from the same bucket. Additionally, you can refresh the inventory of objects that your gateway knows about using the Refresh Cache API.

Note however that if you do not configure a file share as read-only, file gateway does not monitor or restrict these readers from inadvertently writing to the bucket. It is up to you to maintain a single writer/multi-reader configuration from your application.

Question 31:  What Is A Serverless Application In AWS?

In AWS, we can create applications based on AWS Lambda. These applications are composed of functions that are triggered by events. The functions are executed by AWS in the cloud, but we do not have to specify or buy any instances or servers to run them. An application created on AWS Lambda is called a serverless application in AWS.

Question 32: How Will You Manage and Run A Serverless Application In AWS?

We can use AWS Serverless Application Model (AWS SAM) to deploy and run a serverless application. AWS SAM is not a server or software. It is just a specification that has to be followed for creating a serverless application.

Once we create our serverless application, we can use CodePipeline to release and deploy it in AWS. CodePipeline is built on Continuous Integration Continuous Deployment (CI/CD) concept.

Question 33: What Are Spot Instances In Amazon EC2?

Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.

Question 34: What Is The Difference Between Spot Instance And On-Demand Instance In Amazon EC2?

“On-Demand” instances allow the user to pay for compute by the hour without requiring a long-term commitment. There is no guarantee that the user will always be able to launch specific instance types in an Availability Zone, though AWS tries its best to meet the need. This service is preferable for POCs, and these instances do not suffer interruption of service (by AWS) like Spot instances do.
“Spot” instances are a bid-for-low-price version of On-Demand instances, but they can be shut down by AWS any time the Spot price goes above your bid price. The Spot price fluctuates based on the supply of and demand for capacity; it is essentially AWS’s leftover capacity being put to use. There is no difference in performance compared to On-Demand instances, and Spot is usually cheaper because availability is not guaranteed. The user can choose a start and end time for the instances or make a persistent request (no end time specified). This service is preferable for large computing needs that are not tied to deadlines and where interruption of service is acceptable.
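With boto3, a Spot request can be expressed as a regular `run_instances` call carrying `InstanceMarketOptions`. The AMI ID and price ceiling below are placeholders, and the call needs AWS credentials:

```python
def launch_spot(ami_id: str, max_price: str = "0.01") -> str:
    """Launch a Spot instance; AWS may reclaim it if the Spot price exceeds max_price."""
    import boto3  # requires AWS credentials at call time
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {"MaxPrice": max_price},  # bid ceiling, USD per hour
        },
    )
    return resp["Instances"][0]["InstanceId"]

SPOT_OPTIONS = {"MarketType": "spot", "SpotOptions": {"MaxPrice": "0.01"}}
```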

Question 35: How Will You Scale An Amazon EC2 Instance Vertically?

We can use the following steps to scale an Amazon EC2 instance vertically:

Step 1: Start an EC2 instance that is larger in capacity than the one we are currently using.

Step 2: Pause the new instance and detach the root EBS volume from the server.

Step 3: Stop the current live instance and detach its root volume

Step 4: Note the unique device ID and attach that root volume to new server

Step 5: Start the new EC2 instance again

Question 36: When Should We Use Amazon DynamoDB Vs. Amazon S3?

Amazon DynamoDB stores structured data, indexed by primary key, and allows low-latency read and write access to items ranging from 1 byte up to 400 KB. Amazon S3 stores unstructured blobs and is suited for storing large objects of up to 5 TB. To optimize your costs across AWS services, large objects or infrequently accessed data sets should be stored in Amazon S3, while smaller data elements or file pointers (possibly to Amazon S3 objects) are best saved in Amazon DynamoDB.

Question 37: How Does Amazon EC2 Work?

Amazon Elastic Compute Cloud (Amazon EC2) is a computing environment provided by AWS. It supports highly scalable computing capacity in AWS.

Instead of buying hardware for servers we can use Amazon EC2 to deploy our applications. So there is no need to buy and maintain the hardware within our own datacenter. We can just rent the Amazon EC2 servers.

Based on our varying needs, we can use as few or as many Amazon EC2 instances as required. EC2 even provides auto-scaling options, in which the instances scale up or down based on load and traffic spikes.

It is easy to deploy applications on EC2, and we can configure security and networking in Amazon EC2 much more easily than in our own custom data center.

Question 38: Can Dynamodb Be Used By Applications Running On Any Operating System?

Yes. DynamoDB is a fully managed cloud service that you access via API. DynamoDB can be used by applications running on any operating system (e.g. Linux, Windows, iOS, Android, Solaris, AIX, HP-UX, etc.). We recommend using the AWS SDKs to get started with DynamoDB. You can find a list of the AWS SDKs on our Developer Resources page. If you have trouble installing or using one of our SDKs, please let us know by posting to the relevant AWS Forum.

Question 39: What Are The Benefits Of Using A Virtual Private Cloud In AWS?

By launching your instances into a VPC instead of EC2-Classic, you gain the ability to:

  • Assign static private IPv4 addresses to your instances that persist across starts and stops
  • Assign multiple IPv4 addresses to your instances
  • Define network interfaces, and attach one or more network interfaces to your instances
  • Change security group membership for your instances while they’re running
  • Control the outbound traffic from your instances (egress filtering) in addition to controlling the inbound traffic to them (ingress filtering)
  • Add an additional layer of access control to your instances in the form of network access control lists (ACL)
  • Run your instances on single-tenant hardware
  • Assign IPv6 addresses to your instances
Question 40: What are the different types of actions in Object Lifecycle Management in Amazon S3?

Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action for Amazon S3 to apply to a group of objects. These actions can be classified as follows:

  • Transition actions – In which you define when objects transition to another storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation or archive objects to the GLACIER storage class one year after creation.
  • Expiration actions – In which you specify when the objects expire. Then Amazon S3 deletes the expired objects on your behalf.
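Both kinds of actions live together in a bucket’s lifecycle configuration. A hypothetical configuration combining the examples above, in the shape accepted by boto3’s `put_bucket_lifecycle_configuration` (the prefix and day counts are made up):

```python
LIFECYCLE = {
    "Rules": [
        {
            "ID": "tier-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},  # applies to this key prefix only
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # transition action
                {"Days": 365, "StorageClass": "GLACIER"},     # archive after a year
            ],
            "Expiration": {"Days": 730},  # expiration action: delete after 2 years
        }
    ]
}

def apply_lifecycle(bucket: str) -> None:
    """Apply the rules above to a bucket you own (requires AWS credentials)."""
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=LIFECYCLE
    )
```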

I hope you enjoyed these AWS interview questions. The topics that you learnt in this AWS interview questions blog are among the most sought-after skill sets that recruiters look for in an AWS Solutions Architect Professional. For a detailed study of AWS, you can refer to an AWS Tutorial.
