
Cloud Computing Service Models

==============================

(Yes = we have to manage, No = the vendor manages)

Resources        On-Premise   IaaS   PaaS   SaaS
---------------  ----------   ----   ----   ----
Applications     Yes          Yes    Yes    No
Data             Yes          Yes    Yes    No
Runtime          Yes          Yes    No     No
Middleware       Yes          Yes    No     No
O/S              Yes          Yes    No     No
Virtualization   Yes          No     No     No
Servers          Yes          No     No     No
Storage          Yes          No     No     No
Networking       Yes          No     No     No

Cloud Computing Deployment Models


=================================
Public: Cloud service providers make cloud computing resources available to the
general public over the internet. Public cloud services may be free or offered on a
pay-per-use model.

Private: Private cloud services are dedicated to one organization or business and
have specific security controls.

Hybrid: A combination of public and private cloud.

Community: Formed when several organizations with similar requirements share
common infrastructure.

Edge Location (CloudFront): an intermediary between end users and servers for
accessing AWS services. It is a small setup in different locations that provides
low-latency connections by caching static content. Basically it is a Content
Delivery/Distribution Network and is used with the AWS CloudFront service.

EC2 (Elastic Compute Cloud)


===========================
Security Group: By default an EC2 instance is created behind a firewall with all
IPs and ports blocked. A Security Group defines the inbound and outbound rules for
an EC2 instance.
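A minimal sketch of creating a security group with an SSH inbound rule from the
CLI (the group name and port are just examples):

    aws ec2 create-security-group --group-name demo-sg --description "allow ssh"
    aws ec2 authorize-security-group-ingress --group-name demo-sg \
        --protocol tcp --port 22 --cidr 0.0.0.0/0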

KeyPair: A key pair is used to authenticate to EC2 instances without a password.

A key pair contains a private key and a public key. The private key needs to be
downloaded locally to authenticate to the EC2 instance, and the public key stays
on the EC2 side.
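A minimal sketch of creating a key pair from the CLI and logging in with the
private key (the key name and IP are placeholders):

    aws ec2 create-key-pair --key-name demo-key \
        --query 'KeyMaterial' --output text > demo-key.pem
    chmod 400 demo-key.pem
    ssh -i demo-key.pem ec2-user@<ec2-public-ip>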

Enable Password:
In /etc/ssh/sshd_config set PasswordAuthentication yes
sudo service sshd restart

Service Quota: By default AWS limits EC2 instances with certain configurations, so
we can raise a request with AWS and get the limits increased.

Instance Connect: a feature provided by AWS, for Amazon Linux servers only, to
connect via the browser instead of PuTTY.

Cost Optimization: class4

On Demand: No need to commit. Use and terminate.
Reserved Instance: We need to commit for 1 year or 3 years with No Upfront,
Partial Upfront or All Upfront. Here we have one more option, standard or
convertible; with convertible we can upgrade EC2s with upfronts.
Spot Instance: Spot Instances are cheap compared to On Demand and Reserved
Instances. AWS declares some of the instances as Spot Instances and assigns them
to any of the accounts. If AWS gets more requests for other instances (On
Demand/Reserved/Dedicated) then AWS gives the account owner a 2-minute warning,
terminates the instance and gives it to others.
Dedicated Hosts (Hardware Tenancy): Dedicated hosts are assigned to a particular
account to create EC2s on, so other accounts can't share the same hosts. By
default all EC2s are created on shared hosts.
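A minimal sketch of launching a Spot Instance from the CLI (the AMI id and key
name are placeholders):

    aws ec2 run-instances --image-id ami-12345678 --instance-type t3.micro \
        --key-name demo-key --instance-market-options 'MarketType=spot'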

Instance Storage: By default EC2 comes with some storage called instance storage,
apart from EBS, to store temporary data and cache data; when the EC2 goes down,
the data in instance storage is lost.

Load Balancing (ELB, Elastic Load Balancing) class5 & 6: Load balancers distribute
the load among the EC2s. By default they use a round-robin algorithm to distribute
the load. In AWS there is the Network Load Balancer, which operates at layer 4
(supports TCP/TLS/UDP protocols and does not support cookies and sessions), the
Application Load Balancer, which operates at layer 7 (supports HTTP/HTTPS and
supports cookies and sessions), and the Classic Load Balancer (a combination of
both Network and Application Load Balancers). ELB and EC2 are in the same region.

ALB (Application Load Balancer) has target groups, which are collections of EC2s
with specific applications, plus listeners and rules. We can have multiple
listeners, and within the listeners we can have multiple rules to route requests
depending on the context path and listen address. With ALB we can do path-based
routing, as in the sketch below.
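A minimal sketch of a path-based routing rule on an ALB listener (the ARNs are
placeholders):

    aws elbv2 create-rule --listener-arn <listener-arn> --priority 10 \
        --conditions Field=path-pattern,Values='/api/*' \
        --actions Type=forward,TargetGroupArn=<target-group-arn>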

Auto Scaling: the concept by which AWS scales resources up and down depending on
conditions and rules, to optimize cost. Auto Scaling has different methods: Fixed
Scaling, Target Scaling, Simple Scaling and Step Scaling. Fixed Scaling maintains
a fixed number of EC2s at any point in time; if we set 4 as the fixed count, AWS
always maintains 4, and if two go down AWS creates 2 new instances. Target Scaling
has a condition with a threshold, for example if the CPU (AWS has multiple
conditions, not only CPU) crosses the specified limit then AWS automatically adds
one more instance. Simple Scaling has one condition to scale the services up and
down, whereas Step Scaling has multiple conditions to scale the services up and
down.

Auto Scaling uses an Auto Scaling group, which is a collection of AWS services;
the group has two components, a launch template for the EC2s and a policy (nothing
but one of the methods above). If we shut down the Auto Scaling group all the
instances go down, and if we start the group all instances start. A creation
sketch follows below.
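A minimal sketch of creating an Auto Scaling group from a launch template (the
names and zones are placeholders):

    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name demo-asg \
        --launch-template LaunchTemplateName=demo-template \
        --min-size 2 --max-size 4 --desired-capacity 2 \
        --availability-zones us-east-1a us-east-1b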

By default metrics are sent to CloudWatch every 5 minutes, which is free. If we
want to send metrics to CloudWatch every minute, we enable detailed monitoring,
which is chargeable.
Create alarms to scale the services up and down; whenever an alarm breaches, auto
scaling happens. Alarms are created as part of the Auto Scaling group and we can
watch these alarms in CloudWatch.
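A minimal sketch of a scale-out alarm on average CPU (the alarm name and policy
ARN are placeholders):

    aws cloudwatch put-metric-alarm --alarm-name demo-cpu-high \
        --namespace AWS/EC2 --metric-name CPUUtilization \
        --statistic Average --period 300 --evaluation-periods 2 \
        --threshold 70 --comparison-operator GreaterThanThreshold \
        --alarm-actions <scaling-policy-arn>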

EBS (Elastic Block Storage)


===========================
EBS can be attached to only one EC2 at any particular time, and both the EBS
volume and the EC2 should be in the same Availability Zone. We can increase an
existing EBS volume and we can also attach more EBS volumes to an existing EC2.
Below are the commands to attach an additional EBS volume to an EC2. When we
terminate an EC2, additionally attached EBS volumes are not terminated, so we can
attach them to newly created EC2s and follow the same steps, except the format
command.
1. Switch to root
2. List the attached disks and note the disk name for the 1GB volume (should be
   like /dev/xvdf): "fdisk -l"
3. Format the disk with a file system: "mkfs /dev/xvdf"
4. Create a directory: "mkdir /mnt/disk100"
5. Map or mount the disk to the folder: "mount /dev/xvdf /mnt/disk100"
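A rough sketch of attaching the volume from the CLI before the steps above, and
keeping the mount across reboots (the ids are placeholders):

    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0123456789abcdef0 --device /dev/xvdf
    # persist the mount across reboots
    echo '/dev/xvdf /mnt/disk100 auto defaults,nofail 0 2' | sudo tee -a /etc/fstab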

EFS (Elastic File System)


========================
EFS can be attached to multiple EC2s, but the EC2s and EFS should be in the same
region. EFS uses the NFS protocol, and this protocol should be enabled in the
security group of the EC2 to mount EFS on the EC2s. Unlike EBS, we do not need to
specify a minimum storage size; it expands automatically.
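A rough sketch, assuming a placeholder file-system id, region and security group:
open NFS (port 2049) in the security group and mount EFS:

    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 2049 --cidr 10.0.0.0/16
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 \
        fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs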

SNS (Simple Notification Service)


=================================
SNS is used to decouple applications. In SNS we have a Topic and Subscribers. This
is a push mechanism. Producers send messages to the Topic (broker) and the
consumers (subscribers) receive the messages. No message is stored in the Topic,
even when the consumers are down.
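A minimal sketch of the topic/subscriber flow from the CLI (the topic ARN and
email address are placeholders):

    aws sns create-topic --name demo-topic
    aws sns subscribe --topic-arn arn:aws:sns:us-east-1:111122223333:demo-topic \
        --protocol email --notification-endpoint user@example.com
    aws sns publish --topic-arn arn:aws:sns:us-east-1:111122223333:demo-topic \
        --message "hello subscribers"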

Route53 class 6
========
Route53 has a number of hosted zones and each hosted zone contains multiple
records. A record is either a CNAME or an A record: a CNAME maps a URL to another
URL, and an A record maps a domain name to IP address(es). Route53 has routing
policies, namely weighted routing, latency-based routing, geo-based routing and
simple routing, to return an IP depending on the policy; weighted routing can also
support a canary deployment (a new deployment with new features receiving a small
share of traffic). Each hosted zone has 4 name servers (DNS servers), so all
records are saved in 4 name servers. These are called domain name servers (DNS),
and these name servers need to be registered/mapped with the domain name.
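A minimal sketch of upserting an A record (the hosted zone id, record name and IP
are placeholders):

    aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
        --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
        {"Name":"www.example.com","Type":"A","TTL":300,
        "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'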

IAM class7
==========
Users: root, admin, power, IAM users
Groups
Policies: A policy is a document (written in the Access Policy Language) that acts
as a container for one or more statements.
Roles are permissions which we can provide to Users, Groups or Services.

An explicit deny takes precedence, as in the sketch below.
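A rough sketch of a policy document (policy.json) where the explicit Deny
overrides the Allow (all names and the account id are placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Action": "s3:*",            "Resource": "*" },
        { "Effect": "Deny",  "Action": "s3:DeleteBucket", "Resource": "*" }
      ]
    }

    aws iam create-policy --policy-name demo-policy --policy-document file://policy.json
    aws iam attach-user-policy --user-name demo-user \
        --policy-arn arn:aws:iam::111122223333:policy/demo-policy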

CLI & SDK class8


================
Whatever we can do from the browser we can do from the CLI & SDK, but we need to
get keys (security credentials) for the CLI and create roles for the SDK (we can
also use security credentials) from AWS. We can create security credentials for
each user and use them to connect with AWS. We can download the CLI and SDKs from
AWS and install them on our system or EC2s:
install the CLI on our laptop
aws configure
provide the security credentials and run the commands (we can get the command
reference from AWS)
We have multiple SDKs to manage and work with AWS resources, for example Python,
Java and others.
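A quick sketch of verifying the setup once configured (any service command then
works from the terminal):

    aws configure                  # prompts for access key, secret key, region, output format
    aws sts get-caller-identity    # confirms which account/user the CLI is authenticated as
    aws ec2 describe-instances     # example service command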

VPC (Virtual Private Cloud) class 8 & 9 (class 8 middle and class 9 start)
==========================================================================
need to revise with practical

Cloud Watch
===========
We can use CloudWatch to see the metrics for each service/resource, and we can
also store logs. We can create alarms in CloudWatch and send a mail/notification
when an alarm breaches, so we can take action accordingly.

Cloud Trail
===========
Using this service we can monitor the actions done by IAM users, such as who
created/deleted/modified AWS resources. All actions related to AWS resources are
logged in CloudTrail. By default the events are stored for 90 days. We can store
these events in S3 forever using trails.

Trusted Advisor
===============
Notifies us which services/resources are not in line with AWS best practices,
including security.

KMS (Key Management Service) class10


============================
KMS is used to encrypt and decrypt data. KMS creates CMKs and DKs to encrypt and
decrypt data. KMS is region specific. KMS uses a symmetric algorithm to encrypt
and decrypt data (it uses the same key to encrypt and decrypt). These keys are
stored in HSMs.
CMKs - Customer Master Keys
DKs - Data Keys
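A rough sketch of encrypting and decrypting with a CMK from the CLI (the alias
and file names are placeholders; the CLI returns ciphertext base64-encoded):

    aws kms create-key --description "demo key"
    aws kms create-alias --alias-name alias/demo-key --target-key-id <key-id>
    aws kms encrypt --key-id alias/demo-key --plaintext fileb://secret.txt \
        --query CiphertextBlob --output text | base64 --decode > secret.enc
    aws kms decrypt --ciphertext-blob fileb://secret.enc \
        --query Plaintext --output text | base64 --decode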

SES (Simple Email Service) class10


==========================
Using SES we can send mails to verified email IDs. This service is mainly for
sending mails.
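A minimal sketch (both addresses are placeholders and must be verified while SES
is in sandbox mode):

    aws ses verify-email-identity --email-address sender@example.com
    aws ses send-email --from sender@example.com \
        --destination ToAddresses=receiver@example.com \
        --message 'Subject={Data=Test},Body={Text={Data=Hello}}'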

SQS (Simple Queue Service) class10


==========================
SQS is similar to SNS, but SQS is a pull mechanism. These two services are mainly
used to decouple applications. Applications put messages in the queue and the
applications on the other end pull the messages. Since the messages are stored in
the queue, if the consumers are down at that moment, they will read the messages
when they come back up, so we will not lose messages. SQS has two types of queues:
one is the Standard Queue and the other is the FIFO (First In First Out) Queue.

Standard Queue:
    Unlimited Throughput: Standard queues support a nearly unlimited number of
    transactions per second (TPS) per API action.
    At-Least-Once Delivery: A message is delivered at least once, but
    occasionally more than one copy of a message is delivered.
    Best-Effort Ordering: Occasionally, messages might be delivered in an order
    different from which they were sent.
FIFO Queue:
    High Throughput: FIFO queues support up to 300 messages per second (300 send,
    receive, or delete operations per second). When you batch 10 messages per
    operation (the maximum), FIFO queues can support up to 3,000 messages per
    second. To request a limit increase, file a support request.
    First-In-First-Out Delivery: The order in which messages are sent and
    received is strictly preserved.
    Exactly-Once Processing: A message is delivered once and remains available
    until a consumer processes and deletes it. Duplicates are not introduced into
    the queue.
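A minimal sketch of the produce/consume cycle from the CLI (the queue URL is a
placeholder; the receipt handle comes from the receive-message output):

    aws sqs create-queue --queue-name demo-queue
    aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/demo-queue \
        --message-body "order-42"
    aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/demo-queue
    aws sqs delete-message --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/demo-queue \
        --receipt-handle <receipt-handle-from-receive-message>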

Lambda class10 and 11


======
Lambda is a service (function) in AWS and it is serverless. We are charged only
when we call this service. We can develop Lambda functions using different
languages. A function can be integrated with different AWS services so that it is
called automatically. API Gateway is one mechanism for calling a Lambda manually.
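A minimal sketch of invoking a function manually from CLI v2 (the function name
and payload are placeholders):

    aws lambda invoke --function-name demo-function \
        --cli-binary-format raw-in-base64-out \
        --payload '{"name":"test"}' response.json
    cat response.json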

S3 (Simple Storage Service) class 11 and 12


===========================
S3 can hold huge amounts of data. The top-level element in S3 is the bucket, and
in a bucket we can have files, folders and subfolders. There are different storage
classes in S3, and data can be moved between these storage classes. By default S3
Standard is used as the storage class; then we have IA (Infrequent Access),
Glacier and others. Data stored in different classes has different pricing:
Standard is high compared to IA and Glacier, but the access speed decreases down
the list. We can use Lifecycle Management to move data from one storage class to
another automatically; for this we create lifecycle rules and specify the days
after which to move, as in the sketch below.
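A rough sketch of a lifecycle rule (lifecycle.json) moving objects to IA after 30
days and Glacier after 90 (the bucket name and day counts are placeholders):

    {
      "Rules": [{
        "ID": "demo-archive",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Transitions": [
          { "Days": 30, "StorageClass": "STANDARD_IA" },
          { "Days": 90, "StorageClass": "GLACIER" }
        ]
      }]
    }

    aws s3api put-bucket-lifecycle-configuration --bucket demo-bucket \
        --lifecycle-configuration file://lifecycle.json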

Intelligent-Tiering: By default all the storage classes use the date when the
object was created to decide when to move it to other classes. In this scenario,
objects that are frequently accessed also move to other classes, and their access
time increases once they move to subsequent classes. Intelligent-Tiering instead
uses the access time of the object, and only moves an object to other classes if
it is not accessed frequently/recently. We do not need to specify days here;
objects in this class are moved to other classes automatically based on their
access time.
Cross Region Replication: Used to copy/replicate data/buckets/objects from one
region to another for backup, disaster recovery, compliance and latency. When we
upload objects in one region, those objects are automatically replicated to the
bucket in the target region. We can configure replication of an entire bucket or
part of a bucket, but before doing this setup, versioning should be enabled.
Source and target can have different storage classes.
Requestor Pays: By default the owner of the object is charged both for storing
objects in S3 and for downloads from S3. If we want to share the cost between
owner and requester, we enable this option, so whoever requests/downloads the
object is charged for the download, while the owner is still charged for storage.
Both owner and requester must have an AWS account.
Versioning: This is like version control systems such as CVS or GitHub.
Transfer Acceleration: Instead of uploading data directly to S3, it uses edge
locations. In this scenario data is first placed in an edge location and then
moved to S3, so performance and speed increase, since the edge locations are near
the user and connected to S3 over high-speed links.
Security: We can secure S3 objects in 3 ways: one is an Access Control List, the
second is a Bucket Policy and the third is an IAM Policy. Both Bucket Policies and
IAM Policies use JSON, but for ACLs we do not need JSON. IAM policies are managed
in the IAM console, while bucket policies and ACLs are managed in the S3
management console. Bucket policies and IAM policies are the same except for the
Principal: in IAM we attach the policy to a specific Principal/User/Account,
whereas a bucket policy does not know the user, which is why we need to specify
the Principal in the bucket policy, as in the sketch below.
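A rough sketch of a bucket policy (policy.json) with an explicit Principal, which
an IAM policy would not need (the account id, user and bucket are placeholders):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "AWS": "arn:aws:iam::111122223333:user/demo-user" },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::demo-bucket/*"
      }]
    }

    aws s3api put-bucket-policy --bucket demo-bucket --policy file://policy.json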

CloudFront: It is a service provided by AWS to get a URL for content in S3, and
not only S3, for any static website.

Storage Gateway: It acts as an interface/bridge between on-premise systems and AWS
S3. It uses a disk interface to map a disk to S3, so we do not need to change our
code to write data into S3. Assume data is being written to a local disk instead
of S3, but later we want to store/write the data in S3; normally we would need to
change the code to connect to S3 and write the data there. Using Storage Gateway
we can write the data into S3 instead of the local disk without changing the code.
There are two types of Storage Gateways: the File Gateway, which uses the NFS
protocol, and the Volume Gateway, which uses the iSCSI protocol to write data into
S3. The Storage Gateway is installed on-premise.

Snowball/Snowmobile
===================
These are offline data transfer services from on-premise to AWS S3.

RDS (Relational Database Service)


=================================
Class 12 and 13

ElastiCache
===========
It is a service provided by AWS to cache application data. The application gets
the data from the database and caches it in ElastiCache, so from the next time
onwards it reads the data from ElastiCache, which makes requests faster and
increases performance. It supports two engines/mechanisms to cache data: Redis and
Memcached. Memcached uses key-value pairs to store and get data, while Redis
supports key-value pairs, complex structures and geospatial queries. Memcached is
multithreaded but Redis is single-threaded.

RedShift
========
This is a data warehouse tool provided by AWS. Redshift gets data from different
services and processes it. Redshift uses columnar storage instead of row format.
BI tools can be used to store and process data with Redshift. We need to create a
table and map this table to a dataset to process data. Redshift has a leader node
and compute nodes; the leader node is created by AWS automatically and we specify
the compute nodes. When a request comes in to process data, it first goes to the
leader node, and the leader node distributes the work to the compute nodes, so the
processing time decreases. Query Editor is an option provided by AWS within
Redshift to manage data.

CloudFormation
==============
CloudFormation automates AWS resources and environments instead of us creating
them manually one by one. It uses CloudFormation templates to create stacks. A
stack is a collection of AWS resources. When we delete a stack, AWS deletes all
the resources specified in the template. A CloudFormation template can be written
in JSON or YAML, or generated with the CDK (Cloud Development Kit). There are
pre-built templates provided by AWS, and using these we can create stacks. We can
also use the visual editor to create templates; when we use the visual/designer
editor it generates both JSON and YAML. A pink dot represents a dependency and a
blue dot represents an association of components.
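A minimal sketch of a template (template.yaml) and the stack commands (the AMI id
and stack name are placeholders):

    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      DemoInstance:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: ami-12345678      # placeholder AMI id
          InstanceType: t2.micro

    aws cloudformation create-stack --stack-name demo-stack --template-body file://template.yaml
    aws cloudformation delete-stack --stack-name demo-stack   # deletes every resource in the stack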

Elastic Beanstalk
=================
It is a service provided by AWS and falls under the PaaS category. PaaS normally
does not allow us to configure the underlying services/components, but Elastic
Beanstalk does. It automates application deployment. In Elastic Beanstalk we first
create an application, then an environment and a version, and finally upload the
application. In the background Elastic Beanstalk creates the corresponding
services/components and deploys the application on them.

Kinesis
=======
Kinesis lets us easily collect, process, and analyze video and data streams in
real time, so we can get timely insights and react quickly to new information. It
mainly includes Video Streams, Data Streams, Data Firehose and Data Analytics.
Kinesis Video Streams is used to capture, process and analyze video streams for
machine learning and analytics. Kinesis Data Streams is used to build custom
applications that analyze data streams using third-party stream-processing
systems. Kinesis Data Firehose is used to load data into AWS data stores. Kinesis
Data Analytics is an easy way to process data streams with SQL. The shard is the
unit that measures the speed of the input and output data.

OpsWorks
========
It is a service provided by AWS to automate things, especially at the application
level. It supports Chef and Puppet. Chef is again divided into two: one is Chef
Solo and the other is Chef Automate. Puppet Enterprise and Chef Automate support a
client-server architecture, but Chef Solo is a standalone application. Puppet
supports the Puppet DSL, which is a declarative language, and Chef supports Ruby,
which is an imperative language. The OpsWorks lifecycle events are setup,
configure, deploy, undeploy and shutdown. Chef has cookbooks, cookbooks are
collections of recipes, and recipes have resources. Puppet has manifests and
modules. OpsWorks works with Stacks, Layers, Instances and Apps.

Athena & EMR


============
These are services provided by AWS to process data, like RedShift.

AWS 5 Pillars
