===============================================================================
Resources        On-Premise    IaaS            PaaS            SaaS
                               (Infrastructure (Platform       (Software
                               as a Service)   as a Service)   as a Service)
===============================================================================
Applications     We manage     We manage       We manage       Vendor manages
Data             We manage     We manage       We manage       Vendor manages
Runtime          We manage     We manage       Vendor manages  Vendor manages
Middleware       We manage     We manage       Vendor manages  Vendor manages
O/S              We manage     We manage       Vendor manages  Vendor manages
Virtualization   We manage     Vendor manages  Vendor manages  Vendor manages
Servers          We manage     Vendor manages  Vendor manages  Vendor manages
Storage          We manage     Vendor manages  Vendor manages  Vendor manages
Networking       We manage     Vendor manages  Vendor manages  Vendor manages
===============================================================================
Private: Private cloud services are dedicated to one organization or business and
have specific security controls
Edge Location (CloudFront): an intermediary between end users and AWS servers.
It is a small setup deployed in many different locations to provide low-latency
access by caching static content. Essentially it is a Content Delivery Network
(CDN) and is used with the AWS CloudFront service.
Enable Password:
In /etc/ssh/sshd_config set PasswordAuthentication yes
sudo service sshd restart
Service Quota: By default AWS limits the number of EC2 instances with certain
configurations, so we can raise a request with AWS and get the limits increased.
Instance Connect: a feature provided by AWS for EC2 Linux servers (notably
Amazon Linux) to connect via the browser instead of PuTTY.
Instance Storage: certain EC2 instance types come with local storage called
instance storage, used for temporary and cache data, separate from EBS. When the
EC2 instance goes down, the data in instance storage is lost.
Load Balancing (ELB, Elastic Load Balancing) class 5 & 6: Load balancers
distribute the load among EC2 instances. By default they use a round-robin
algorithm to distribute the load. AWS offers the Network Load Balancer, which
operates at layer 4 (supports TCP/TLS/UDP protocols; does not support cookies
and sessions), the Application Load Balancer, which operates at layer 7
(supports HTTP/HTTPS and supports cookies and sessions), and the Classic Load
Balancer (the older generation, covering both network- and application-level
features). The ELB and its EC2 instances are in the same region.
ALB (Application Load Balancer) has target groups, where a target group is a
collection of EC2 instances running a specific application, plus listeners and
rules. We can have multiple listeners, and within each listener multiple rules
to route requests depending on the context path and listen address. With ALB we
can do path-based routing.
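The path-based routing described above can be sketched as a rule table matched
in priority order (rule paths and target-group names here are made up for
illustration; a real ALB evaluates listener rules with priorities and a default
action):

```python
# Hypothetical listener rules: path prefix -> target group
rules = [
    ("/api/", "api-target-group"),
    ("/images/", "images-target-group"),
]
default_target = "default-target-group"

def route(path):
    """Match the request path against the rules in priority order."""
    for prefix, target_group in rules:
        if path.startswith(prefix):
            return target_group
    return default_target  # no rule matched: use the default action

print(route("/api/users"))     # api-target-group
print(route("/images/a.png"))  # images-target-group
print(route("/index.html"))    # default-target-group
```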
Auto Scaling: the concept by which AWS scales resources up and down depending on
conditions and rules, to optimize cost. Auto Scaling has different methods:
Fixed Scaling, Target Scaling, Simple Scaling, and Step Scaling. Fixed Scaling
maintains a fixed number of EC2 instances at any point in time; if we set 4 as
the fixed count, AWS always maintains 4, and if two go down AWS creates two new
instances. Target Scaling has a threshold condition: if, for example, CPU
utilization (AWS supports multiple metrics, not only CPU) crosses the specified
limit, AWS automatically adds one more instance. Simple Scaling has a single
condition to scale the services up and down, while Step Scaling has multiple
conditions (steps) to scale the services up and down.
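The difference between simple and step scaling can be sketched as a table of
thresholds (the CPU bounds and adjustments below are invented for illustration;
real step policies are defined on a CloudWatch alarm with step adjustments):

```python
# Hypothetical step-scaling policy: (lower CPU bound, instances to add).
# A simple-scaling policy would have only one such row.
steps = [
    (90, 3),  # CPU >= 90%: add 3 instances
    (70, 2),  # CPU >= 70%: add 2 instances
    (50, 1),  # CPU >= 50%: add 1 instance
]

def instances_to_add(cpu_percent):
    """Pick the adjustment for the highest matching step."""
    for lower_bound, adjustment in steps:
        if cpu_percent >= lower_bound:
            return adjustment
    return 0  # below every step: no scale-up

print(instances_to_add(95))  # 3
print(instances_to_add(72))  # 2
print(instances_to_add(40))  # 0
```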
Auto Scaling uses an Auto Scaling group, which is a collection of AWS instances.
The group has two components: a launch template for the EC2 instances and a
scaling policy (one of the methods above). If we shut down the Auto Scaling
group, all its instances go down; if we start the group, all instances start.
By default, metrics are sent to CloudWatch every 5 minutes, which is free. If we
want to send metrics to CloudWatch every minute, enable detailed monitoring,
which is chargeable.
Create alarms to scale the services up and down; whenever an alarm breaches,
auto scaling happens. Alarms are created as part of the Auto Scaling group, and
we can watch these alarms in CloudWatch.
Route53 class 6
===============
Route53 has a number of hosted zones and each hosted zone contains multiple
records. A record is either a CNAME or an A record: a CNAME maps a URL to
another URL, and an A record maps a domain name to IP(s). (Canary deployment: a
new deployment with new features rolled out gradually.) Route53 has routing
policies, namely weighted routing, latency-based routing, geolocation-based
routing, and simple routing, which return an IP depending on the policy. Each
hosted zone has 4 name servers (DNS servers), so all records are saved in 4
name servers. These name servers need to be registered/mapped with the domain
name.
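Weighted routing can be sketched as a weighted random pick over the records (the
IPs and weights below are invented; real Route53 weights are relative integers
per record set):

```python
import random

# Hypothetical weighted record set: endpoint IP -> weight
records = {"1.1.1.1": 70, "2.2.2.2": 30}

def resolve(rng):
    """Pick one IP with probability proportional to its weight."""
    endpoints = list(records)
    weights = [records[e] for e in endpoints]
    return rng.choices(endpoints, weights=weights, k=1)[0]

rng = random.Random(0)  # seeded so the simulation is reproducible
answers = [resolve(rng) for _ in range(1000)]
share = answers.count("1.1.1.1") / len(answers)
print(round(share, 2))  # roughly 0.7, matching the 70/30 weights
```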
IAM class 7
===========
Users: root, admin, power users, IAM users
Groups
Policies: A policy is a document (written in the Access Policy Language) that
acts as a container for one or more statements.
Roles: permissions which we can grant to Users, Groups, or Services
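A policy document is JSON with a Version and a list of statements; the sketch
below builds a minimal one (the bucket name is a placeholder chosen for
illustration):

```python
import json

# A minimal IAM policy document: one statement allowing read-only
# access to a hypothetical S3 bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```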
VPC (Virtual Private Cloud) class 8 & 9 (class 8 middle and class 9 start)
==========================================================================
need to revise with practical
Cloud Watch
===========
We can use CloudWatch to see the metrics for each service/resource and also to
store logs. We can create alarms in CloudWatch and send mail/notifications when
an alarm breaches, so we can take action accordingly.
Cloud Trail
===========
Using this service we can monitor the actions done by IAM users, like who
created/deleted/modified AWS resources. All actions related to AWS resources are
logged in CloudTrail. By default, events are stored for 90 days. We can store
these events in S3 forever using trails.
Trusted Advisor
===============
Notifies us which services/resources do not follow AWS best practices,
including security.
SQS (Simple Queue Service)
==========================
Standard Queue:
Unlimited Throughput: Standard queues support a nearly unlimited number
of transactions per second (TPS) per API action.
At-Least-Once Delivery: A message is delivered at least once, but
occasionally more than one copy of a message is delivered.
Best-Effort Ordering: Occasionally, messages might be delivered in an
order different from which they were sent.
FIFO Queue:
High Throughput: FIFO queues support up to 300 messages per second (300
send, receive, or delete operations per second). When you batch 10 messages per
operation (maximum), FIFO queues can support up to 3,000 messages per second. To
request a limit increase, file a support request.
        First-In-First-Out Delivery: The order in which messages are sent and
received is strictly preserved.
Exactly-Once Processing: A message is delivered once and remains
available until a consumer processes and deletes it. Duplicates are not introduced
into the queue.
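The FIFO guarantees above can be sketched with a toy in-memory queue (a
simplification: real SQS deduplicates by MessageDeduplicationId only within a
5-minute window, and the class below is invented for illustration):

```python
from collections import deque

class FifoQueueSketch:
    """Toy model of FIFO queue semantics: strict order + deduplication."""

    def __init__(self):
        self.messages = deque()
        self.seen_dedup_ids = set()

    def send(self, body, dedup_id):
        # Duplicates (same deduplication id) are not introduced
        if dedup_id in self.seen_dedup_ids:
            return False
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append(body)
        return True

    def receive(self):
        # Messages come out in exactly the order they went in
        return self.messages.popleft()

q = FifoQueueSketch()
q.send("order-1", "id-1")
q.send("order-2", "id-2")
q.send("order-1", "id-1")          # duplicate: silently dropped
print(q.receive(), q.receive())    # order-1 order-2
```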
S3 (Simple Storage Service)
===========================
Intelligent-Tiering: By default, the other storage classes use the date the
object was created to move it to other classes; in that scenario, objects that
are frequently accessed also move to other classes, and access time increases
once objects reach the subsequent classes. Intelligent-Tiering instead uses the
access time of the object: only if the object has not been accessed
frequently/recently does it move to other classes. We do not need to specify
days here; objects in this class are moved to other classes automatically based
on their access time.
Cross-Region Replication: used to copy/replicate data/buckets/objects from one
region to another for backup, disaster recovery, compliance, and latency. When
we upload objects in one region, they are automatically replicated to the target
bucket in the other region. We can configure replication for the entire bucket
or part of it, but versioning must be enabled before this setup. Source and
target can have different storage classes.
Requester Pays: by default, the owner of the object is charged for storing and
downloading objects in/from S3. If we want to share the cost between owner and
requester, enable this option: whoever requests/downloads the object is charged
for downloading, while storage is still charged to the owner of the object.
Both owner and requester must have AWS accounts.
Versioning: This is similar to version control systems like CVS or GitHub.
Transfer Acceleration: Instead of uploading data directly to S3, it uses edge
locations. Data is first placed in an edge location and then moved to S3, so
performance and speed increase, since edge locations are near the user and
connected to S3 over a high-speed network.
Security: We can secure S3 objects in 3 ways: Access Control Lists (ACL),
Bucket Policy, and IAM Policy. Both bucket and IAM policies use JSON; ACLs do
not need JSON. IAM policies are managed in the IAM console, while bucket
policies and ACLs are managed in the S3 management console. A bucket policy and
an IAM policy are the same except for the Principal: we need to specify a
Principal in a bucket policy. An IAM policy is attached to a specific
Principal/User/Account, whereas a bucket policy does not know the user, which
is why the Principal must be specified in the bucket policy.
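The Principal difference can be shown by writing the same permission both ways
(the account id, user name, and bucket name below are placeholders made up for
illustration):

```python
# The shared part of both policies: allow reading objects from a bucket
statement = {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example-bucket/*",
}

# IAM policy: no Principal, because it is attached to a user/group/role
iam_policy = {"Version": "2012-10-17", "Statement": [dict(statement)]}

# Bucket policy: must name the Principal it grants access to
bucket_statement = dict(statement)
bucket_statement["Principal"] = {"AWS": "arn:aws:iam::123456789012:user/alice"}
bucket_policy = {"Version": "2012-10-17", "Statement": [bucket_statement]}

print("Principal" in iam_policy["Statement"][0])     # False
print("Principal" in bucket_policy["Statement"][0])  # True
```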
Snowball/Snowmobile
===================
These are offline data transfer services from on-premise storage to AWS S3.
ElastiCache
===========
A service provided by AWS to cache application data. The application gets the
data from the database and caches it in ElastiCache, so from then on it reads
the data from ElastiCache, making requests faster and increasing performance.
It supports two engines to cache data: Redis and Memcached. Memcached uses
key-value storage to store and get data, while Redis supports key-value
storage, complex data structures, and geolocation queries. Memcached is
multithreaded, while Redis is single-threaded.
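The read path described above is the cache-aside pattern, sketched here with
plain dicts standing in for the database and for ElastiCache (all names and
data are invented for illustration):

```python
# The "database" stands in for RDS; the "cache" stands in for ElastiCache.
database = {"user:1": "Alice"}
cache = {}

def get_user(key):
    """Cache-aside read: try the cache first, fall back to the database."""
    if key in cache:
        return cache[key], "cache"
    value = database[key]
    cache[key] = value  # populate the cache for next time
    return value, "database"

print(get_user("user:1"))  # ('Alice', 'database') - first read is a miss
print(get_user("user:1"))  # ('Alice', 'cache')    - second read is a hit
```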
RedShift
========
This is a data warehouse tool provided by AWS. Redshift gets data from different
services and processes it. Redshift uses columnar storage instead of row format.
BI tools are used to store and process the data with Redshift. We need to create
a table and map it to a dataset to process data. Redshift has a leader node and
compute nodes; the leader node is created automatically by AWS, while we specify
the compute nodes. When a request comes in to process data, it first goes to the
leader node, which distributes the work to the compute nodes, so the processing
time decreases. Query Editor is an option provided by AWS within Redshift to
manage data.
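The columnar-vs-row idea can be sketched in a few lines: with a columnar layout,
an aggregate over one column only touches that column's data (the table contents
below are made up for illustration):

```python
# Row format stores whole records together
rows = [
    {"id": 1, "region": "us-east-1", "sales": 100},
    {"id": 2, "region": "eu-west-1", "sales": 250},
    {"id": 3, "region": "us-east-1", "sales": 175},
]

# Columnar layout: one contiguous list per column
columns = {name: [r[name] for r in rows] for name in rows[0]}

# SELECT SUM(sales): scans a single column, not every full record
total = sum(columns["sales"])
print(total)  # 525
```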
CloudFormation
==============
CloudFormation automates the creation of AWS resources and environments instead
of creating them manually one by one. It uses CloudFormation templates to create
stacks. A stack is a collection of AWS resources. When we delete a stack, AWS
deletes all resources specified in the template. A CloudFormation template can
be written in JSON or YAML, or generated with the CDK (Cloud Development Kit).
AWS provides pre-built templates, so we can use them to create stacks. We can
also use the visual/designer editor to create templates; when we use it, it
generates both JSON and YAML. In the designer, a pink dot represents a
dependency and a blue dot represents an association between components.
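A minimal JSON template can be sketched as a Python dict: one resource, an EC2
instance (the instance type is real, but the AMI id is a placeholder that would
need to be replaced with a real image id for the region):

```python
import json

# A minimal CloudFormation template describing a one-instance stack
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "MyInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "InstanceType": "t2.micro",
                "ImageId": "ami-00000000000000000",  # placeholder AMI id
            },
        }
    },
}

print(json.dumps(template, indent=2))
```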
Elastic Beanstalk
=================
A service provided by AWS that falls under the PaaS category. PaaS normally does
not allow us to configure the underlying services/components, but Elastic
Beanstalk does. It automates application deployment. In Elastic Beanstalk, first
we create an Application, then an Environment and a Version, and finally we
upload the application. In the background, Elastic Beanstalk creates the
corresponding services/components and deploys the application on them.
Kinesis
=======
Easily collect, process, and analyze video and data streams in real time, so you
can get timely insights and react quickly to new information. It mainly includes
Video Streams, Data Streams, Data Firehose, and Data Analytics. Kinesis Video
Streams is used to capture, process, and analyze video streams for machine
learning and analytics. Kinesis Data Streams is used to build custom
applications that analyze data streams, possibly using third-party
stream-processing systems. Kinesis Data Firehose is used to load data into AWS
data stores. Kinesis Data Analytics is an easy way to process data streams with
SQL. A shard is the unit used to measure the speed of the input and output data.
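Shard sizing can be sketched as a back-of-the-envelope calculation. This assumes
the commonly cited per-shard ingest limits of 1 MB/s and 1,000 records/s (verify
against the current AWS documentation before relying on them):

```python
import math

# Assumed Kinesis Data Streams per-shard ingest limits
MB_PER_SHARD = 1.0        # 1 MB/s write throughput per shard
RECORDS_PER_SHARD = 1000  # 1,000 records/s per shard

def shards_needed(mb_per_sec, records_per_sec):
    """Take the larger of the two constraints and round up."""
    by_bytes = math.ceil(mb_per_sec / MB_PER_SHARD)
    by_records = math.ceil(records_per_sec / RECORDS_PER_SHARD)
    return max(by_bytes, by_records)

print(shards_needed(4.5, 2000))  # 5 (byte throughput dominates here)
```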
OpsWorks
========
A service provided by AWS to automate things, especially at the application
level. It supports Chef and Puppet. Chef is divided into two: Chef Solo and Chef
Automate. Puppet Enterprise and Chef Automate support a client-server
architecture, while Chef Solo is a standalone application. Puppet supports the
Puppet DSL, which is a declarative language, and Chef supports Ruby, which is an
imperative language. OpsWorks lifecycle events are setup, configure, deploy,
undeploy, and shutdown. Chef has cookbooks, a cookbook is a collection of
recipes, and recipes contain resources. Puppet has manifests and modules.
OpsWorks works with Stacks, Layers, Instances, and Apps.
AWS 5 Pillars