An Introduction to Cloud Computing and AWS Certification

In this article, we’ll look at what cloud computing is, the different types of cloud computing, what a cloud provider is, and why you might want to use one. We’ll also survey the best cloud providers, and dig into AWS services in particular and what cloud certification is all about.
When starting your cloud computing career, one of the first steps is to choose a cloud provider. Using that provider's services, you'll be able to learn about various cloud computing concepts and practice your skills.
A cloud provider is a company that offers you computing services over the Internet. In the simplest terms, it allows you to store and run your applications on somebody else’s computers.
In reality, you can do more than this with a cloud provider, and you’ll get a glimpse of that in this article!
Rather than purchasing equipment, setting up your infrastructure, and maintaining it, you can use a cloud provider.
This way, you can focus on building and maintaining your applications without worrying about the physical infrastructure.
There are many cloud providers available, and there’s no right or wrong answer when choosing one. Some of the most popular cloud service providers are:
Amazon Web Services is the biggest and most popular cloud provider. Another strong point of AWS is its certification program. Amazon’s certifications are among the highest-paying certifications in IT.
As a result, this article focuses on AWS for your introduction to cloud computing. The AWS Cloud Practitioner Certificate is Amazon’s foundational course, which teaches the fundamentals of cloud computing and AWS.
Amazon Web Services offers 11 certifications that are divided into four categories.
AWS certifications
The Foundational Level has only one certification, the AWS Certified Cloud Practitioner certificate. It covers topics such as:
The Cloud Practitioner certificate is suitable and recommended for people who are getting started with cloud computing and AWS. To ease you into the cloud world, this article goes over cloud computing and AWS fundamentals, and you can use it as preparation for the Cloud Practitioner certificate.

The next level is the Associate Level, which has three certifications:
These certifications are more complex than the Foundational Level, and they teach you how to implement solutions using the AWS infrastructure. With the Associate Level certificates, you dive deep into individual services rather than getting an overview of them.
The certificate you choose depends on the path you want to follow. The AWS Solutions Architect certificate helps you gain general AWS expertise, so you can use it as the foundation for the certificates that follow.
After the AWS Certified Cloud Practitioner certificate, you could work towards the Solutions Architect one.
The following levels, Professional Level and Specialty, are the most difficult certifications. You don’t have to worry about them for now.
Let’s start with some fundamental information on cloud concepts. The first question you might ask yourself is “what is cloud computing?”
In layman's terms, cloud computing is like using someone else's computer. Instead of owning your own servers, you rent them from a provider such as AWS.
In more sophisticated terms, cloud computing is the on-demand delivery of IT resources over the Internet on a pay-as-you-go basis.
When it comes to cloud computing, there are six significant benefits:
Now you know what cloud computing is and its six significant benefits. The next stage is to become familiar with the various types of cloud computing.
There are three types of Cloud Computing:
In addition to the three types of cloud computing, there are four cloud computing deployment models. These are:
At the time of writing, Amazon has 81+ availability zones within 25+ geographic regions. There are over 230 points of presence, split as follows:

A region is a geographic area, and it consists of at least two availability zones (AZs). The reason for having at least two AZs is in case one of the data centers goes down. For example, one region is eu-west-1 (Ireland). Regions are independent of one another, and US-EAST-1 is the largest region. As a result, almost all services become available in this region first.
An availability zone is essentially a data center (a building containing lots of physical servers). An availability zone might actually consist of several data centers, but because they're close to each other, they're counted as one AZ.
Points of presence are data centers placed at the edge of the networks.
An edge location is an AWS endpoint for caching content. That’s typically CloudFront, which is AWS’s content delivery network. The purpose of these edge locations is to provide low latency for the end users.
There’s a unique region that’s not available to everyone. This region is called GovCloud, and it’s only accessible to companies from the US and US citizens. You also have to pass a screening process. GovCloud allows users to host sensitive Controlled Unclassified Information such as military information.
This section covers different AWS technologies, such as compute services, storage services, logging services, and many more.
Identity Access Management, or IAM, is one of the essential tools in AWS. IAM is global, which means you don’t have to choose a specific region to use it.
A company typically has several departments, and each needs different types of access. Using IAM, you can define specific permissions for each department. IAM allows you to create users, groups, and roles. It also allows you to apply a password policy, which specifies what passwords must contain (numbers, special characters, and so on). All users and groups are created globally.
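To make this concrete, here's a minimal sketch of those IAM operations using the AWS SDK for Python (boto3). It assumes your credentials are already configured, and the user, group, and policy choices are purely illustrative:

```python
import boto3

iam = boto3.client("iam")  # IAM is global, so no region is needed

# Create a group for one department and add a user to it
iam.create_group(GroupName="developers")
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="developers", UserName="alice")

# Give the whole group read-only access to S3 via an AWS managed policy
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Apply an account-wide password policy
iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireNumbers=True,
    RequireSymbols=True,
)
```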
According to AWS best practices, you should avoid using the root account for everyday work and never grant root access to anyone. Whoever gains access to the root account has complete control over the account. You should also turn on multi-factor authentication (MFA).
AWS Organizations is an account management service that allows you to consolidate multiple AWS accounts into a single organization. It enables you to manage billing, access, security, compliance, and resource sharing across your AWS accounts. You can, for example, simplify billing by setting up a single payment method for all of your AWS accounts.
Organizational units are groups within an organization that can contain other organizational units. AWS Organizations allows you to isolate different departments in the company, for instance, separating developers from human resources.
The goal of creating organizations for your teams is to attach policies and control access for each team individually. Service control policies define the rules for each organizational unit, ensuring that your accounts follow the guidelines set out by your department.
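As a rough sketch of how this looks in code, the boto3 calls below create an organizational unit and attach an existing service control policy to it. The root ID and policy ID are placeholders you'd look up in your own organization:

```python
import boto3

org = boto3.client("organizations")

# Create an organizational unit for one department
ou = org.create_organizational_unit(
    ParentId="r-examplerootid",   # placeholder: your organization's root ID
    Name="Developers",
)

# Attach a service control policy (created beforehand) to the new OU
org.attach_policy(
    PolicyId="p-examplepolicyid",  # placeholder: an existing SCP's ID
    TargetId=ou["OrganizationalUnit"]["Id"],
)
```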
There are several AWS Compute Services. However, we’re only looking at EC2, ECS, Elastic Beanstalk, Fargate, EKS, Lambda, and Batch for this exam.

EC2 stands for Elastic Compute Cloud, a virtual server (or servers) in the cloud. EC2 makes it simple to scale up or down, depending on how your requirements change.
There are different types of pricing for EC2 instances. They are as follows:
If Amazon shuts down your EC2 instance, you won’t be charged for the remaining hour of usage. However, if you terminate your EC2 instance, you’ll be charged for any hours that the instance was running.
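For a feel of how little is needed to get a virtual server running, here's a minimal boto3 sketch that launches a single t2.micro instance. The AMI ID is a placeholder, since real IDs are region-specific:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```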
EBS (Elastic Block Store) is a virtual hard drive that gets attached to your EC2 instances. Once an EBS volume is attached to an EC2 instance, you can use it just as you would a physical hard drive. The EC2 instance needs to be in the same Availability Zone as the EBS volume. EBS comes in two flavors: SSD and Magnetic.
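Here's a short sketch of how attaching a volume might look with boto3: the volume is created in the same Availability Zone as the (placeholder) instance, then attached to it:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Create a 20 GiB SSD-backed volume in the instance's AZ
volume = ec2.create_volume(
    AvailabilityZone="eu-west-1a",
    Size=20,
    VolumeType="gp3",
)

# Wait until the volume is ready, then attach it to an existing instance
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
    Device="/dev/sdf",
)
```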
AWS ELB (Elastic Load Balancing) is used to balance traffic between your resources. For instance, if one EC2 instance goes down, traffic is redirected to another one, or another instance is launched. The same happens if one of your resources is overloaded with traffic. That means your application stays available to users instead of going down. There are three types of load balancers:
The critical difference between these load balancers is that the Application Load Balancer can inspect the content of requests (it operates at the application layer) and make routing decisions based on it. In contrast, the Network Load Balancer is used when you need extremely high performance and static IP addresses.
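As an illustration, the boto3 sketch below creates an internet-facing Application Load Balancer. The subnet and security group IDs are placeholders for resources you'd already have in your VPC:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")

response = elbv2.create_load_balancer(
    Name="my-app-alb",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],  # placeholders
    SecurityGroups=["sg-0123456789abcdef0"],                           # placeholder
    Scheme="internet-facing",
    Type="application",
)
print(response["LoadBalancers"][0]["DNSName"])
```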
ECS (Elastic Container Service) is a highly scalable, high-performance container orchestration service that supports Docker containers. It enables you to deploy and run containerized applications on AWS. With the EC2 launch type, you select the type of EC2 instance you want, and it comes pre-configured with Docker.
You can quickly start or stop an application and access other services and resources such as IAM, CloudFormation templates, a load balancer, CloudTrail logs, or CloudWatch events. You pay for the EC2 instances that ECS uses.
When you think of Fargate, I want you to think of the phrase serverless containers. Fargate enables you to run containers without the need to manage servers or clusters. Essentially, you deploy applications without having to worry about the infrastructure. You no longer need to select server types or decide how and when to scale your clusters.
ECS has two launch options: Fargate and EC2. All you have to do for the Fargate launch type is package your application in a container, specify the CPU and memory, and define the network and IAM policies. After that, your application is ready for deployment.
Fargate charges you per task, based on the CPU and memory the task uses; you don't have to pay for EC2 instances. Fargate is best suited for consistent workloads that are packaged in Docker containers.
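Here's a hypothetical boto3 sketch of running an existing task definition on Fargate. The cluster, task definition, and subnet names are placeholders; notice there's no instance type anywhere, only the CPU and memory already defined in the task definition:

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

ecs.run_task(
    cluster="my-cluster",                # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="my-app:1",           # placeholder task definition (family:revision)
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```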
EKS (Elastic Kubernetes Service) is a managed Kubernetes service. It runs your Kubernetes management infrastructure across several AWS availability zones to remove any single point of failure.
Finally, EKS is better suited to complex architectures with thousands of containers, while ECS is better suited to simpler architectures.

Lambda functions are serverless functions: you upload your code, and AWS takes care of everything needed to run it. AWS Lambda allows you to run your code without provisioning or managing servers.
You pay for the compute time you consume. There’s no charge when the Lambda isn’t running. A use case for Lambda functions would be unpredictable and inconsistent workloads.
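For reference, a Python Lambda function is just a handler that receives the triggering event. Here's a minimal example that echoes part of the event back; `lambda_handler` is the default handler name AWS expects for Python functions:

```python
def lambda_handler(event, context):
    # 'event' carries the data from whatever triggered the function
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }
```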
AWS Elastic Beanstalk is a fast and straightforward way to deploy your application on AWS. This service handles capacity provisioning, load balancing, autoscaling, and health monitoring automatically.
Elastic Beanstalk is covered in greater detail later in the “AWS Provisioning Services” section.
AWS Batch allows you to plan, manage, and execute your batch processing jobs. This service runs batch processing workloads across the entire AWS compute services portfolio, including EC2 and Spot Instances.
We also need to store our data somewhere, right? Not to worry, as AWS allows us to do just that with a wide range of services.
The first in line is one of the oldest and most fundamental AWS services — Amazon Simple Storage Service (S3).
S3 allows users to store and retrieve any amount of data from anywhere in the world. It provides highly scalable, secure, and durable object storage. In simpler words, S3 is a safe place where you can store your flat files (such as videos and images). By "flat", I mean that the content doesn't change. (For example, you can't store a database in S3, as it changes continuously.) The data in your S3 buckets is spread across multiple devices and facilities to protect against failures.
But wait, what does "object storage" mean? Data is stored in buckets, and each bucket consists of key–value pairs. The key is the file's name, and the value is the contents of the file.
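In code, that key–value model looks like this. The sketch below uses boto3 to upload and read back an object; the bucket name is a placeholder (bucket names must be globally unique):

```python
import boto3

s3 = boto3.client("s3")

# The key is the object's name; the body is its contents
s3.put_object(
    Bucket="my-example-bucket",   # placeholder bucket name
    Key="notes/hello.txt",
    Body=b"Hello, S3!",
)

# Retrieve the object using the same key
obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
print(obj["Body"].read().decode())
```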
Some essential quick points about S3 are:
What are the features of the S3 service?
S3 data consistency is of vital importance as well. What about it, though?
How does S3 charge you? S3 charges you based on:

The last thing that remains is to look at the different S3 storage classes. They are as follows:
The figure below compares the S3 storage classes.
AWS S3 Storage Classes (source: AWS re:Invent)
There are multiple database services, but they’re split into two parts. There are NoSQL and SQL (relational) databases. The NoSQL databases available on AWS are:
The SQL (relational) databases are:
The relational databases have two key features:
Provisioning refers to creating resources and services for a customer. In AWS, it's the process of setting up the resources your applications need. The AWS provisioning services are:
Let’s start with CloudFormation, one of the most powerful and helpful tools in AWS.
CloudFormation turns your infrastructure into code: you describe all the resources your application needs in a JSON or YAML template, and CloudFormation creates them for you as a stack. That means you don't have to manually create resources in the AWS console and then link them together.
See an example of a CloudFormation template that creates an EC2 instance with security groups here (it’s in YAML format).
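To give a flavor of it, here's a sketch that creates a stack from a tiny inline YAML template using boto3. The AMI ID is a placeholder, and a real template would usually also define security groups, parameters, and outputs:

```python
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder, region-specific
      InstanceType: t2.micro
"""

cfn = boto3.client("cloudformation", region_name="eu-west-1")
cfn.create_stack(StackName="demo-ec2-stack", TemplateBody=TEMPLATE)

# Block until every resource in the stack has been created
cfn.get_waiter("stack_create_complete").wait(StackName="demo-ec2-stack")
```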
Elastic Beanstalk allows you to upload your application code. It automatically creates all the resources for you (provisioning your EC2 instances, your security groups, your application load balancers, all with the click of a button). It automatically handles the details of capacity provisioning, load balancing, scaling, and application monitoring.
Elastic Beanstalk is an excellent service for quickly deploying and managing applications in the cloud without you having to worry about the infrastructure if you’re unfamiliar with AWS. It automates everything for you. If you want to associate this service with something more familiar, Elastic Beanstalk is AWS’s own Heroku.

AWS Quick Starts allow you to quickly deploy applications in the cloud by using existing CloudFormation templates built by experts. Let’s say you want to deploy a WordPress blog on AWS. You can go to AWS Quick Starts and use a template that does just that, so you don’t have to build it yourself.
Amazon describes AWS Marketplace like this:
AWS Marketplace is a digital catalogue with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS.
You could use AWS Marketplace to buy a pre-configured EC2 instance for your WordPress blog.
Lastly, OpsWorks is a configuration management service that allows you to manage instances of Chef and Puppet. It gives you the ability to use code to automate the configuration of your servers. More OpsWorks information can be found here.
One important area we need to cover is logging. If your services go down, you surely want to know why that happened. Thus, AWS provides two logging services that help you with that:
It can be easy to confuse these two services, so you can read more about the difference between AWS CloudTrail and AWS CloudWatch if you're interested.
AWS CloudFront is Amazon’s content delivery network (CDN). A CDN is a system of distributed servers worldwide that serves web content to users based on their geographical location and the web page origin.
There are two types of CloudFront distributions:
This is an essential section. The reason is that you don’t want to incur any unnecessary expenses (which is relatively easy to do with AWS), and it’s also a vital component of the exam.
You must remember the AWS paying principles. These are as follows:
Also, on AWS you pay for:
AWS is smart. To entice you to use their services, they don’t charge you for migrating your data to them. They do, however, charge you when you transfer data from their cloud.

The other two important terms you should know are CAPEX and OPEX. CAPEX stands for Capital Expenditure, and it means paying upfront; it's a fixed cost. OPEX stands for Operational Expenditure, and it means paying only for what you use.
There are four fundamental pricing principles. These are:
These are the critical pricing policies, and you can read more about them here.
One of the downsides of AWS is how easy it is to generate a massive bill. If you don't pay attention and don't make the most of budgets and billing alarms, you may rack up a bill of a few thousand dollars or even more.
A billing alarm enables you to set spending limits to ensure that you don't overspend. You'll be warned when you reach a certain threshold and are close to exceeding the limit you've set.
Learn how to set a budget on AWS
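As a sketch of what a billing alarm looks like in code, the boto3 call below alarms when estimated monthly charges exceed $50 and notifies a (placeholder) SNS topic. Billing metrics live in us-east-1 and have to be enabled in your billing preferences first:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-50-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                     # check every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
```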
Let’s ease in with the free services from AWS. The free AWS services are as follows:
There is, however, a catch. These services are free, but the resources they create or use aren't. For example, although CloudFormation is free, the resources it generates aren't: you'll be charged for the EC2 instances and anything else it creates. Keep this in mind at all times.
There are currently four support plans with different features: Basic, Developer, Business, and Enterprise. Let's see how they differ and what they offer.
The Basic plan is the most limited, with essentially no technical support. It could be used for testing AWS or for very small applications.
With the Developer support plan, things get better. It's a paid plan, and it offers more benefits than the Basic plan.
The Business support plan is better still.

The response times are very good with this support plan. If your production system is down, you get a response in less than one hour.
The Enterprise plan is the best support plan. However, it comes with a hefty price tag.
The main benefit of this support plan is that you’ll be assigned a technical account manager. This is an Amazon employee who’s solely responsible for your account.
The key takeaway from the AWS support plans is to remember the case severities and response times. Also, remember which support plan includes a Technical Account Manager. In the exam, you're given a scenario and have to choose the appropriate support plan.
You can go to the Marketplace and buy a pre-configured WordPress blog that runs on AWS, for example. You can purchase CloudFormation templates, Amazon Machine Images, AWS Web Application Firewall rules, and other items.
Be warned that while the Marketplace service may be free, there may be additional fees related to the software you buy. AWS deducts the charges from your account before paying the vendor.
AWS allows you to create a paying account to aggregate your payments from all of your AWS accounts. To put it another way, you can pay all of your bills from a single account.
Keep in mind that the paying account is separate from all other accounts and has no access to their resources.
What are the advantages of using this service?
In this section, we’ll go through AWS Budgets and AWS Cost Explorer.
AWS Budgets allows you to build custom budgets that warn you when you’re about to go over your budget limit, or when that limit is exceeded.
AWS Cost Explorer is a tool for checking and managing your AWS expenditures over time.
The difference between them is that AWS Budgets enables you to explore costs prior to being charged, whereas AWS Cost Explorer can be used to investigate costs after you’ve been charged.
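Here's a hypothetical boto3 sketch of a monthly cost budget that emails you once 80% of a $100 limit is reached. The account ID and email address are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",  # placeholder account ID
    Budget={
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}  # placeholder
            ],
        }
    ],
)
```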

TCO stands for Total Cost of Ownership, and it helps you compare the costs of your AWS cloud infrastructure to the costs of your on-premises infrastructure.
AWS TCO indicates how much you may save by migrating from on-premises to AWS cloud. It only provides an estimate, so the actual expenses may differ.
The AWS Trusted Advisor is a tool that helps users reduce costs, improve performance, and increase security by implementing the recommendations it provides. In other words, the Trusted Advisor provides users with advice on cost optimization, performance, security, fault tolerance, and service limits. It also ensures that users adhere to AWS best practices by providing real-time guidance.
There are two tiers of Trusted Advisor: free and business/enterprise. With the free tier, you get seven Trusted Advisor checks, whereas with the business/enterprise tier, you get all the Trusted Advisor checks.
Tags are metadata (information about data) and are represented as key–value pairs. These tags are associated with AWS resources and can contain information such as EC2 public and private addresses, ELB port configuration, or RDS database engines.
Resource groups allow you to categorize your resources based on the tags that have been assigned to them. They may include information such as the region, name, or department.
Simply put, tags and resource groups allow you to organize your resources.
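In practice, tagging and then filtering by tag looks something like this boto3 sketch (the instance ID and tag values are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Tag an instance so it can be grouped and filtered later
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[
        {"Key": "Department", "Value": "Engineering"},
        {"Key": "Environment", "Value": "Production"},
    ],
)

# Find every instance tagged with that department
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:Department", "Values": ["Engineering"]}]
)
```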
The final phase is to investigate what factors influence costs for various services such as EC2, Lambda, S3, and others.
Security is an essential topic, especially in the cloud.
According to the shared responsibility model, AWS is responsible for security of the cloud, while customers are responsible for security in the cloud.
What exactly do they mean when they say “security of the cloud”? They claim that AWS is responsible for the infrastructure that the services run on. The physical servers, the location where they’re stored, the networking, and the facilities that run the AWS cloud services are all part of the infrastructure.
What do they mean by “security in the cloud”? Customers are responsible for patching their EC2 instances, securing their customer data, ensuring compliance with various legislations, and employing IAM (Identity Access Management) solutions, among other things. The customer’s responsibilities are determined by the AWS service they’re using. You are directly responsible for the data you put on AWS and for enabling monitoring tools.
AWS Shared Responsibility Model
The figure above illustrates the shared responsibilities between the customers and AWS.
First of all, let’s define what compliance programs are. Compliance programs are a set of internal policies and procedures of the company to comply with laws and regulations.
For example, if you’re a hospital that uses AWS services, you must comply with HIPAA. Another example is when you accept credit card payments and must be PCI DSS compliant. We have AWS Artifact to ensure that you’re complying with regulations.
AWS Artifact is a service that provides access to AWS compliance programs. AWS Artifact allows you to find, accept, and manage AWS agreements for a single account or all accounts within your organization. It also allows you to cancel any previously accepted agreement if it is no longer required.
AWS Inspector is an automated security service that evaluates your applications hosted on AWS to improve their security and compliance.
AWS Inspector examines your applications to see if they deviate from existing best practices and if they contain any security flaws. When the assessment is finished, it will generate a report with all the findings organized by severity level.
Its goal is to remove as many security flaws as possible.
I’m sure you’ve heard of web attacks like SQL injections, cross-site scripting (XSS), and sensitive data exposure, among other things. The AWS WAF service’s purpose is to protect your applications from common web exploits like those, as well as many others.
This service allows you to filter traffic based on the contents of HTTP requests. That is, depending on the contents of the incoming HTTP requests, you can DENY or ALLOW traffic to your application. You could also use a pre-existing ruleset from the AWS WAF Rules marketplace.
AWS WAF can be attached to CloudFront, your Application Load Balancer, or the Amazon API Gateway.
The cost of AWS WAF is determined by the number of rules you deploy and the number of requests your applications receive.
AWS WAF doesn’t protect your applications from all attacks and exploits. Applications must also be protected from DDoS attacks. A DDoS attack is an attempt to make an application unresponsive by overwhelming it with requests. The server can’t handle all the requests and the application breaks. As a result, users can no longer access the application.

This is where AWS Shield comes in handy. AWS Shield is a security service that protects AWS-hosted applications. It’s always on and actively scans the applications. Its goal is to reduce downtime and latency by protecting your application against DDoS attacks. When you route your traffic through Route53 or CloudFront, you’re automatically using AWS Shield.
AWS Shield comes in two flavors: Basic and Advanced. The Basic version is free and enabled by default. The Advanced version costs $3,000 per month, but it can be worth the money, because you aren't billed for the extra usage your resources incur during a DDoS attack. Even if your resources were maxed out during the attack, you won't pay anything extra. That's not the case with the Basic tier, where a DDoS attack can result in massive charges.
AWS Shield protects an application against three layers of attack:
AWS GuardDuty is a threat-detection service that continuously monitors AWS-hosted applications for malicious and suspicious activity, as well as unauthorized behavior.
This service scans CloudTrail, VPC, and DNS logs using machine learning, anomaly detection, and integrated threat intelligence. It will automatically notify you if it discovers any problems.
Amazon Macie is a security service that exclusively scans S3 buckets for sensitive information using machine learning and natural language processing. Sensitive information includes information such as credit card numbers, for example.
When it detects anomalies, it generates detailed alerts for you to review.
AWS Athena allows you to query data in S3 buckets using SQL. It’s a serverless service. Therefore, no setup is required. There’s no need to set up complex Extract/Transform/Load operations.
AWS Athena charges per query, based on the amount of data scanned.
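A minimal sketch of a query with boto3: the database, table, and results bucket are placeholders, and the query results are written back to S3:

```python
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "my_logs_db"},                    # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},   # placeholder bucket
)
print(query["QueryExecutionId"])
```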
The AWS VPN gives you the ability to create a secure and private connection to your AWS network. There are two types of VPNs:
Security groups act as a firewall at the instance level, and they implicitly deny all inbound traffic. You create allow rules to permit traffic to your EC2 instances. For example, you can enable HTTP traffic to your EC2 instances through port 80 by adding a specific rule.
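For instance, the boto3 sketch below creates a security group in a (placeholder) VPC and adds exactly that kind of rule, allowing inbound HTTP on port 80 from anywhere:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

sg = ec2.create_security_group(
    GroupName="web-servers",
    Description="Allow inbound HTTP",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # HTTP from anywhere
        }
    ],
)
```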
NACLs (Network Access Control Lists) act as a firewall at the subnet level. You can create ALLOW and DENY rules for your subnets. What does that mean in practice? For example, you could deny access from a specific IP address known for abuse.
Congratulations on taking your first steps towards your cloud computing journey!
After learning about the fundamental cloud concepts and AWS basics, you're ready to start working towards the AWS Certified Cloud Practitioner certificate.