
CHAPTER 1

INTRODUCTION

Cloud computing, often referred to as "the cloud", in simple terms means storing or accessing your data and programs over the internet instead of on your own hard drive. Everything these days is moved to the cloud, runs in the cloud, is accessed from the cloud or is stored in the cloud. AWS is the most trusted provider of cloud computing, offering not only excellent cloud security but also excellent cloud services.

1.1 LOCAL INFRASTRUCTURE SERVER

Data center infrastructure usually includes the power, cooling and building components necessary to support data center hardware. The data center hardware infrastructure typically includes servers; storage subsystems; networking devices such as switches, routers and physical cabling; and dedicated network appliances such as network firewalls.

A data center infrastructure also requires careful consideration of IT infrastructure security. This can include physical security for the building, such as electronic key entry, constant video and human surveillance of the premises, carefully controlled access to the server and storage areas, and so on. This ensures that only authorized personnel can access the data center hardware infrastructure and reduces the potential for malicious damage or data theft.

Outside of the data center is an internet infrastructure, which includes transmission media such as fibre-optic cables, satellites, routers, aggregators, repeaters, load balancers and other network components that manage transmission paths. Internet infrastructures are designed, built and operated by Internet Service Providers (ISPs). When a business engages an ISP for internet access, the ISP typically ties into the data center infrastructure within a dedicated and secured building area.

Cloud computing is changing the way infrastructures are designed and implemented. Where traditional, business-owned data centers are private, capital-intensive resources, cloud computing allows organizations to access a cloud provider's data center infrastructure and services for a fee. This infrastructure-as-a-service (IaaS) model permits flexible, on-demand computing. Users can invoke a cloud provider's compute, storage and services without the need to deploy those resources locally, and can adjust cloud infrastructure usage as workload needs change.

1.2 WHAT IS A CLOUD?

Cloud computing, frequently referred to as "the cloud", in basic terms means storing or accessing your information and programs over the web as opposed to your own hard drive. Everything these days is moved to the cloud, running in the cloud, accessed from the cloud or stored in the cloud. Hence, the demand for Certified Cloud Architects is increasing across all sectors of the economy.

The cloud is somewhere at the other end of your internet connection, where you store your documents so that they can be accessed from anywhere in the world.

Three reasons to prefer cloud are as follows:

1. You do not need to maintain or administer any infrastructure yourself.

2. It will never run out of capacity, since it is virtually infinite.

3. You can access your cloud-based applications from anywhere; you only need a device that can connect to the web.

1.3 MODELS OF CLOUD

The services that the cloud offers can be divided into 3 different models:

1. SaaS (Software as a Service)

In this service, the cloud provider leases applications or software that it owns to its customers. The customers can access this software on any device connected to the Internet, using tools such as a web browser, an app, and so on. For example, salesforce.com provides CRM (Customer Relationship Management) software on a cloud infrastructure to its customers and charges them for it; however, the software is owned by the Salesforce organization itself.

2. PaaS (Platform as a Service)

In this service, the cloud provider enables the client to deploy client-made applications using programming languages, tools and so on that are provided by the cloud provider. The client does not control the underlying architecture, including operating systems, storage, servers and so on. For example, this service would make sense to you if you are a developer, since it gives you a platform for creating applications, such as Google App Engine.

3. IaaS (Infrastructure as a Service)

In this service, the cloud provider furnishes the client with virtual machines and other resources as a service, abstracting the client away from the physical machine, its location and so on. If the client needs a Linux machine, he gets a Linux machine; he does not need to worry about the physical machine or the networking of the system on which the OS is installed.

For example, AWS (Amazon Web Services) is IaaS, as AWS EC2 provides virtual machines as a service.

CHAPTER 2

2.1 CLOUD INFRASTRUCTURE

Cloud infrastructure is no different from typical data centre infrastructure except that it

is virtualised and offered as a service to be consumed via the Internet. Servers, storage,

compute resources and security are all key components of cloud infrastructure. Cloud

infrastructure can be managed much more efficiently than traditional physical

infrastructure, which typically requires that individual servers, storage, compute and

networking components be procured and assembled to support an application. With

cloud infrastructure, DevOps teams can deploy infrastructure programmatically, as part

of an application’s code. Cloud infrastructure is flexible and scalable, making it ideal

for enterprise computing. A cloud-based infrastructure has several key components, such as servers, software, network devices and storage resources.

A cloud infrastructure is offered in three methods:

1. Public Cloud: In this cloud deployment model, the services are generally available or open for use by the public, often free of charge or on a pay-per-use basis. Since it is available for use by everyone, it involves a risk factor that cannot be neglected.

2. Private Cloud: It is designed mainly for a single organisation, and its services can only be used by the authorised users of that organisation. It is mainly favoured if the company is big enough to run its own data centers and has a sufficient budget. Consequently, the cost of a private cloud is higher, since the hardware has to be upgraded periodically and security has to be given the utmost priority.

3. Hybrid Cloud: A hybrid cloud architecture includes a combination of private and public cloud offerings. It combines the efficiency of a public cloud with the security of a private cloud.

2.2 AMAZON WEB SERVICES (AWS)

It is a global cloud platform that allows you to manage and host services on the internet. It provides many services on which you can run your applications in the cloud. It is based on a paid subscription, with a free trial option available for 12 months. Amazon, which started as an online bookstore, has emerged as the leading cloud computing provider in the world. Amazon Web Services (AWS) is a service that provides developers and businesses a simple way to create a set of related AWS resources and provision them in an orderly and predictable fashion. AWS offers a wide set of services. It helps users move fast in business, lower their costs, and gain access to on-demand computing resources. AWS provides a highly scalable and reliable infrastructure, primarily for deploying solutions with minimal support. It offers a whole set of cloud computing services that enable organizations to build accessible applications corresponding to their requirements. Specific services include compute, networking, storage and content delivery, databases, deployment, management and application services.

Why is it such a big hit? Everyone is trying to use AWS; everyone is trying to put their applications in the cloud. So what is the reason that AWS is the top provider in cloud?

One of the biggest reasons is billing: billing is very clear. AWS provides per-hour billing, and every instance or service has micro-billing. For an EC2 instance, you get per-hour billing, which is very transparent. S3 buckets are charged per gigabyte; although S3 is a storage service, micro-billing is still available.

The second reason is the easy sign-up process: you do not need to sign any agreement; just sign up with an email id and add a credit card, and you are signed in. The account is created within a minute.

The third reason is that AWS is a trusted vendor: it is the most trusted service provider in the IT market, used by everyone from small start-ups to big enterprises.

Amazon Web Services (AWS) is a secure cloud services platform, offering compute

power, database storage, content delivery and other functionality to help businesses

scale and grow. In simple words, AWS allows you to do the following:

 Run web and application servers in the cloud to host dynamic websites.

 Securely store all your files in the cloud so you can access them from anywhere.

 Use managed databases like MySQL, PostgreSQL, Oracle or SQL Server to store information.

 Deliver static and dynamic files quickly around the world using a Content Delivery Network (CDN).

 Send bulk email to your customers.

The total cost of hosting your static website on AWS will vary depending on your usage. Typically, it will cost $1-3/month if you are outside the AWS Free Tier limits. If you qualify for the AWS Free Tier and stay within its limits, hosting your static website will cost around $0.50/month. Service prices are reduced when there is a 6- or 12-month subscription, and per-hour billing applies to almost everything.

2.3 AWS v/s ITS COMPETITORS

AWS and Microsoft Azure are two major players in the cloud computing industry; however, AWS is still larger than Azure. The server capacity of AWS is about six times the combined server capacity of all of its competitors.

In addition, AWS started its cloud journey way back in 2006, compared to Microsoft Azure, which was launched in 2010; therefore, in terms of service, AWS's service model is more mature. Amazon owns the biggest data centers in the world, strategically placed all around the globe. Azure is nowhere close to the capacity that Amazon has; then again, Microsoft has been working hard to achieve the kind of services and flexibility that Amazon offers. For instance, in 2014, Microsoft launched a redundant storage option called Zone Redundant Storage, which is on par with the services that Amazon offers.

2.4 GARTNER MAGIC QUADRANT

Recently, analyst house Gartner, Inc. released their 2019 Magic Quadrant for Public Cloud Infrastructure Professional and Managed Services, Worldwide. The Magic Quadrant lists 19 different MSP vendors and showcases their offerings, strong suits and weaknesses. Gartner then measures the vendors on both the completeness of their vision and their ability to execute. From these measurements, the vendors are placed into one of four categories: Leaders, Challengers, Visionaries, and Niche Players.

The 19 vendors that Gartner analysed this year are: 2nd Watch, Accenture, Atos, Bespin Global, Capgemini, Cloudreach, Cognizant, Deloitte, DXC Technologies, HCL Technologies, Infosys, Logicworks, Nordcloud, Rackspace, Samsung SDS, Smartronix, Tata Consultancy Services, Unisys, and Wipro.

The Leaders quadrant was more crowded this year, with six entrants rather than 2018's three. Accenture and Rackspace maintained their Leader standing, whereas Cloudreach moved to the Challengers category.

CHAPTER 3

PROJECT PHASES

3.1 AWS ACCOUNT CREATION

3.2 ARCHITECTURE DESIGN PHASE

Creation of cloud architecture for web application deployment.

3.3 INFRASTRUCTURE DEPLOYMENT

After designing the architecture, it is time to start the infrastructure deployment. The step-by-step approach is as follows.

3.4 VPC CREATION

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS

resources into a virtual network that you've defined. This virtual network closely

resembles a traditional network that you'd operate in your own data centre, with the

benefits of using the scalable infrastructure of AWS.

Benefits of using VPC are as follows:

i. Define custom network.

ii. Assign static private IPv4 addresses to the instances.

iii. Define network interfaces and attach one or more network interfaces to the instances.

iv. Define the routing between different subnets.

v. Define the internet access for the subnets.

vi. Define your network security by allowing/denying the traffic.

Amazon VPC provides massive computing power through Amazon EC2, scalable storage via S3, and dedicated private IP addresses via Amazon Elastic IP. Amazon Elastic IP allocates a separate IP address for each EC2 instance and isolates internet-accessible servers from inaccessible ones, so that only the desired servers can be reached by remote users. Amazon VPC can also be connected to an in-house VPN to create a dedicated connection between the physical and cloud data centres.
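The same setup can also be done from the AWS Command Line Interface. The following is a minimal sketch, not this project's exact configuration; the CIDR block and the VPC ID (vpc-0abc12345) are example placeholder values:

$ # Create a VPC with a /16 IPv4 CIDR block (example range)
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
$ # The command returns a VpcId (here assumed to be vpc-0abc12345),
$ # which every later command that targets this VPC needs
$ aws ec2 create-tags --resources vpc-0abc12345 --tags Key=Name,Value=project-vpc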

3.5 SUBNET CREATION

A VPC spans all the Availability Zones in the region. After creating a VPC, you can

add one or more subnets in each Availability Zone. When you create a subnet, you

specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each

subnet must reside entirely within one Availability Zone and cannot span zones.

Availability Zones are distinct locations that are engineered to be isolated from failures

in other Availability Zones. By launching instances in separate Availability Zones, you

can protect your applications from the failure of a single location. You can specify a range of publicly routable IPv4 addresses; however, direct access to the internet from publicly routable CIDR blocks in a VPC is currently not supported. Windows instances cannot boot correctly when launched into a VPC with ranges from 224.0.0.0 to 255.255.255.255 (the Class D and Class E IP address ranges).

They are containers within your VPC that segment off a slice of the CIDR block you

define in your VPC. Subnets allow you to give different access rules and place resources

in different containers where those rules should apply. We can also assign IPv6 CIDR

blocks to VPC and to the subnets.

There are 2 types of subnets:

Public Subnet - A public subnet has an outbound route that sends all traffic through

what AWS calls an Internet Gateway (IGW). The IGW lets traffic — IPv4 or IPv6 —

out of the VPC without any constraints on bandwidth. Instances in public subnets can

also receive inbound traffic through the IGW as long as their security groups and

Network ACLs allow it.

Private Subnet - Contrast that with a private subnet which, if it needs to talk to the

internet, must do so through a Network Address Translation (NAT) gateway. NATs are

really common: if you run a wireless router at home, the router itself does network address translation.

Importantly a NAT won’t allow inbound traffic from the internet — that’s what makes

a private subnet, well, private.

To add a subnet to your VPC

1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.

2. In the navigation pane, click Subnets, and then click Create Subnet.

3. In the Create Subnet dialog box, select the VPC, select the Availability Zone,

and specify the IPv4 CIDR block for the subnet.

4. For IPv6 CIDR block, choose Specify a custom IPv6 CIDR.

5. Click Yes, Create.
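The same subnets can be created from the AWS CLI. A minimal sketch, assuming the placeholder VPC ID vpc-0abc12345 and example CIDR blocks, with one subnet per Availability Zone:

$ # Public subnet in one Availability Zone
$ aws ec2 create-subnet --vpc-id vpc-0abc12345 \
    --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
$ # A second subnet in a different zone, for fault tolerance
$ aws ec2 create-subnet --vpc-id vpc-0abc12345 \
    --cidr-block 10.0.2.0/24 --availability-zone us-east-1b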

3.6 INTERNET GATEWAY

An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic. An internet gateway supports IPv4 and IPv6 traffic. To use an internet gateway, your subnet's route table must contain a route that directs internet-bound traffic to the internet gateway. You can scope the route to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6), or you can scope the route to a narrower range of IP addresses; for instance, the public IPv4 addresses of your company's public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC. If your subnet is associated with a route table that has a route to an internet gateway, it is called a public subnet.

An internet gateway serves 2 purposes:

1) To provide a target in your VPC route tables for internet-routable traffic

2) To perform network address translation (NAT) for instances that are assigned

public IPv4 addresses.

To enable communication over the internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that is associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides the one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP address. Conversely, traffic that is destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance's private IPv4 address before the traffic is delivered to the VPC. To enable communication over the internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique, and therefore public by default.
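For illustration, an internet gateway can be created and attached to the VPC from the AWS CLI as sketched below; the IDs are placeholders, not values from this project:

$ # Create the internet gateway; the command returns an InternetGatewayId
$ aws ec2 create-internet-gateway
$ # Attach the gateway to the VPC so its public subnets can reach the internet
$ aws ec2 attach-internet-gateway --internet-gateway-id igw-0def67890 \
    --vpc-id vpc-0abc12345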

3.7 ROUTE TABLES

A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. Every subnet in your VPC must be associated with a route table; the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table. Each VPC has a main route table by default.

 You cannot delete the main route table, but you can replace the main route table with a custom table that you have created.

 Each route in a table specifies a destination CIDR and a target.

 CIDR blocks for IPv4 and IPv6 are treated separately. For example, a route with a destination CIDR of 0.0.0.0/0 (all IPv4 addresses) does not automatically include all IPv6 addresses. You must create a route with a destination CIDR of ::/0 for all IPv6 addresses.

 Every route table contains a local route for communication within the VPC over IPv4. If your VPC has more than one IPv4 CIDR block, your route tables contain a local route for every IPv4 CIDR block. If you have associated an IPv6 CIDR block with your VPC, your route tables contain a local route for the IPv6 CIDR block. You cannot modify or delete these routes.

 When you add an internet gateway, a virtual private gateway, a NAT device, a peering connection, or a VPC endpoint to your VPC, you must update the route table for any subnet that uses these gateways or connections.

 There is a limit on the number of route tables you can create per VPC, and on the number of routes you can add per route table.
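As a sketch of how the pieces above fit together (all IDs are placeholders), a custom route table can be created, given a default route to the internet gateway, and associated with the public subnet:

$ # Create a custom route table in the VPC
$ aws ec2 create-route-table --vpc-id vpc-0abc12345
$ # Send all internet-bound IPv4 traffic (0.0.0.0/0) to the internet gateway
$ aws ec2 create-route --route-table-id rtb-0aaa11111 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0def67890
$ # Associate the route table with the public subnet
$ aws ec2 associate-route-table --route-table-id rtb-0aaa11111 \
    --subnet-id subnet-0bbb22222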

3.8 SECURITY GROUPS

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you can specify one or more security groups; otherwise, the default security group is used. You can add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When deciding whether to permit traffic to reach an instance, all the rules from all the security groups that are associated with the instance are evaluated.

You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC.

 Security groups are stateful: if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. Responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.

 Instances associated with a security group cannot talk to each other unless you add rules allowing it.

 Security groups are associated with network interfaces. When you launch an instance, you can change the security groups associated with the instance, which changes the security groups associated with the primary network interface (eth0). You can also change the security groups associated with any other network interface.

When you create a security group, you must provide it with a name and a description.

The following rules apply:

 Names and descriptions can be up to 255 characters in length.

 Names and descriptions are limited to the following characters: a-z, A-Z, 0-9,

spaces, and ._-:/()#,@[]+=&;{}! $*.

 A security group name cannot start with sg-.

 A security group name must be unique within the VPC.
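A hedged CLI sketch of creating such a security group for the web server (the group name, IDs and the admin IP address below are example values):

$ # Create a security group in the VPC
$ aws ec2 create-security-group --group-name web-sg \
    --description "Web server security group" --vpc-id vpc-0abc12345
$ # Allow inbound HTTP from anywhere
$ aws ec2 authorize-security-group-ingress --group-id sg-0ccc33333 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
$ # Allow inbound SSH only from a single admin IP address
$ aws ec2 authorize-security-group-ingress --group-id sg-0ccc33333 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32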

3.9 ELASTIC CLOUD COMPUTE (EC2)

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time needed to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure-resilient applications and isolate them from common failure scenarios. Amazon EC2 provides the following:

 Virtual computing environments, called instances

 Preconfigured templates for your instances, called Amazon Machine Images (AMIs), that package the bits you need for your server (including the operating system and additional software)

 Various configurations of CPU, memory, storage, and networking capacity for your instances, called instance types

 Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key in a secure place)

 Storage volumes for temporary data that are deleted when you stop or terminate your instance, called instance store volumes

 Persistent storage volumes for your data using Amazon Elastic Block Store (EBS) volumes

 Multiple physical locations for your resources, such as instances and Amazon EBS volumes, called Regions and Availability Zones

 A firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, using security groups

 Static IPv4 addresses for dynamic cloud computing, called Elastic IP addresses

 Metadata, called tags, that you can create and assign to your Amazon EC2 resources

 Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network, called virtual private clouds (VPCs)

Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network's access permissions, and run your image using as many or as few systems as you want.

To use Amazon EC2, you simply:

 Select a pre-configured template Amazon Machine Image (AMI) to get up and running immediately. Alternatively, create an AMI containing your applications, libraries, data, and associated configuration settings.

 Configure security and network access on your Amazon EC2 instance.

 Choose the instance type(s) you want, then start, terminate, and monitor as many instances of your AMI as needed.

 Determine whether you want to run in multiple locations, utilize static IP endpoints, or attach persistent block storage to your instances.

 Pay only for the resources that you actually consume, such as instance-hours or data transfer.
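As a minimal sketch of launching such an instance from the AWS CLI into the subnet and security group created earlier (the AMI ID, key pair name and resource IDs are placeholders, not values from this project):

$ # Launch one t2.micro instance into the public subnet with the web security group
$ aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro --key-name my-key-pair \
    --subnet-id subnet-0bbb22222 --security-group-ids sg-0ccc33333 \
    --associate-public-ip-address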

3.10 ELASTIC BLOCK STORAGE

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent, low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes, all while paying a low price for only what you provision.

Amazon EBS is designed for application workloads that benefit from fine-tuning for performance, cost and capacity. Typical use cases include big data analytics engines (like Hadoop), relational and NoSQL databases (like Microsoft SQL Server and MySQL), stream and log processing applications, and data warehousing applications. Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once attached, you can create a file system on top of these volumes, run a database, or use them in any other way you would use block storage. Amazon EBS volumes are placed in a specific Availability Zone, where they are automatically replicated to protect you from the failure of a single component. All EBS volume types offer durable snapshot capabilities and are designed for 99.999% availability.

Amazon EBS provides a range of options that allow you to optimize storage performance and cost for your workload. These options are divided into two major categories: SSD-backed storage for transactional workloads, such as databases and boot volumes (performance depends primarily on IOPS), and HDD-backed storage for throughput-intensive workloads, such as MapReduce and log processing (performance depends primarily on MB/s).

SSD-backed volumes include the highest-performance Provisioned IOPS SSD (io1) for latency-sensitive transactional workloads and General Purpose SSD (gp2), which balances price and performance for a wide variety of transactional data. HDD-backed volumes include Throughput Optimized HDD (st1) for frequently accessed, throughput-intensive workloads and the lowest-cost Cold HDD (sc1) for less frequently accessed data.

Elastic Volumes is a feature of Amazon EBS that allows you to dynamically increase capacity, tune performance, and change the type of live volumes with no downtime or performance impact. This allows you to easily right-size your deployment and adapt to performance changes. Volume storage for EBS Provisioned IOPS SSD (io1) volumes is charged by the amount you provision in GB per month until you release the storage.
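For illustration, a gp2 volume can be created and attached to an instance from the CLI; this is a sketch with placeholder IDs, and the Availability Zone must match the instance's zone:

$ # Create a 10 GiB General Purpose SSD (gp2) volume in the instance's zone
$ aws ec2 create-volume --availability-zone us-east-1a --size 10 --volume-type gp2
$ # Attach the volume to the instance as device /dev/sdf
$ aws ec2 attach-volume --volume-id vol-0ddd44444 \
    --instance-id i-0eee55555 --device /dev/sdf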

3.11 LOAD BALANCER

A load balancer is a device that acts as a reverse proxy and distributes incoming network traffic across a variety of targets, for example in different Availability Zones. By using a load balancer, we can easily increase capacity and reduce the load on any single server. Performance therefore improves significantly, as the burden is distributed among virtual servers. The fault tolerance of the application also increases.

The work of encryption and decryption can also be offloaded to the load balancer, so that the servers can focus on their main task.

Load balancers are typically classified into 3 categories:

1) Application Load Balancers

It works at the application layer and supports path-based routing. It can distribute requests to one or more ports on each container instance in the cluster. It routes requests based on the information found in application-layer protocols such as HTTP/HTTPS.

2) Network Load Balancers

It works on the information found in network- and transport-layer protocols (IP, TCP, UDP) and is capable of handling millions of requests per second. After receiving a connection, it selects a target from the target group using a flow-hash routing algorithm and then attempts to open a TCP connection to the selected target. Network Load Balancers support dynamic host port mapping.

3) Classic Load Balancers

This type of load balancer makes decisions at either the transport layer or the application layer. It requires a fixed relationship between the load balancer port and the container instance port.

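A hedged sketch of setting up an Application Load Balancer from the AWS CLI; the names are examples, the subnet/instance IDs are placeholders, and the ARNs in angle brackets come from the previous commands' output:

$ # Create an Application Load Balancer across two subnets
$ aws elbv2 create-load-balancer --name web-alb \
    --subnets subnet-0bbb22222 subnet-0fff66666 --security-groups sg-0ccc33333
$ # Create a target group and register the EC2 instance in it
$ aws elbv2 create-target-group --name web-targets --protocol HTTP \
    --port 80 --vpc-id vpc-0abc12345
$ aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=i-0eee55555
$ # Forward incoming HTTP requests on port 80 to the target group
$ aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>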

CHAPTER 4

AWS WEB APPLICATION DEPLOYMENT

4.1 APACHE SERVER

The Apache HTTP Server is open-source web server software used to deploy and serve websites and web applications, including highly available and highly scalable deployments on cloud Infrastructure as a Service (IaaS) platforms. It runs on modern operating systems including UNIX and Windows. The Apache server can be easily integrated with other open-source applications such as PHP and MySQL, and over the internet we can connect it to other applications. The Apache server also provides a secure, efficient service that offers server space, web services and file maintenance for websites.

The Apache server is software that receives a user's request to access a web page, runs a few security checks on the HTTP request, and serves the requested page; it handles the communication with the website and also helps manage cached content for users.

The Apache server is highly customizable and can listen on various IP addresses, which are specified in its configuration. The method of hosting a wide range of domain names under a single server is called virtual hosting.

We can easily install the Apache server on our cloud instance by using these commands in our Linux terminal:

 $ sudo apt-get update

 $ sudo apt-get install apache2

 $ sudo service apache2 restart

23 | P a g e
Now our Apache server is installed and we can use it for our website, making it more secure and effective for the users.

The Apache server can be stopped from the terminal, after which the website cannot be accessed by users; the server must be running before users can access the website. On Ubuntu/Debian (where the service is named apache2; on Amazon Linux it is httpd), we write:

 $ sudo service apache2 stop (to stop the Apache server)

 $ sudo service apache2 start (to start the Apache server)
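Once Apache is running, the website files go into its document root. A minimal sketch, assuming the default Ubuntu document root /var/www/html and a local index.html file:

$ # Copy the site files into Apache's document root
$ sudo cp index.html /var/www/html/
$ # Verify that the page is served locally
$ curl http://localhost/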

4.2 PHP SERVER

PHP is a server-side scripting language and a tool used for making dynamic and interactive web pages. PHP also ships with a built-in web server for development. PHP is the most widely used language for making efficient and effective web pages, and it is open source. A PHP file can contain text, HTML, CSS, JavaScript and PHP code, so anyone writing PHP code should have a basic understanding of HTML, CSS and JS. The PHP code is executed on the server and the result is shown in the browser; the file extension is '.php'. PHP output is not limited to HTML: it can also return PDF files, images, and even video.

PHP can generate dynamic page content and also provides security, as it can encrypt data. It works on many platforms, such as Windows, Unix and Mac, and is compatible with many servers. The user can add, delete and modify data in the database. By using PHP, the web page provider can easily insert data into the web page and make it more interactive, so whenever a user opens the web page it can easily be accessed. We can execute the code, and the output of the code is shown on the website/web browser.

First, we have to install PHP on our system by using the command (the package name php applies to current Ubuntu releases; on older releases it was php5):

 $ sudo apt-get install php libapache2-mod-php

After installing it, the code can be written in HTML, CSS and JS to make the website more dynamic, interactive and user-friendly. Generally, the code of the website lives in the index.html or index.php file.

The PHP code is written in the body of the HTML, inside <?php ... ?> tags; HTML is generally used alongside it to make the website more interactive. The PHP server helps to run the website and to store the data of the user. First navigate to the directory containing the file, then open it for editing from the terminal:

 vim index.html (then make the changes).
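To verify that Apache is actually executing PHP, a small test file can be created and fetched; this is a sketch, and info.php is just an example file name:

$ # Create a one-line PHP test page in the document root
$ echo '<?php phpinfo(); ?>' | sudo tee /var/www/html/info.php
$ # Fetch it; the output should be the PHP configuration page, not the raw code
$ curl http://localhost/info.php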

4.3 SECURE SHELL (SSH)

SSH is known as Secure Shell. It is an encrypted protocol used to administer and communicate with servers. SSH keys give us an easy and extremely secure way of logging into your server. An SSH key pair consists of two cryptographically related keys, a public key and a private key, that can be used to authenticate a user or client.

The private key should be retained by the client and kept secret. If the private key is not kept secret, an intruder can easily get into any server configured with the associated public key. The private key should be encrypted with a passphrase as an additional precaution.

The public key is shared freely. The public key can be used to encrypt messages/data that only the private key can decrypt.

The public key is uploaded to the remote server that you want to log into with SSH; it is added to a special file, ~/.ssh/authorized_keys.

Advantages of using SSH keys:

 The private SSH key is never exposed on the network, and the passphrase is only used to decrypt the key on the local machine.

 The private key is kept in a restricted directory and is hidden, so no unauthorized user can access the key.

 If an intruder wants to crack the private SSH key passphrase, the intruder must already have access to the system, which means the intruder would already have access to your account. In this situation, the passphrase can still prevent the intruder from immediately logging into your other servers.

So the pair of public key and private key helps to secure the data. SSH can be accessed by using the PuTTY tools; PuTTY takes the private key (converted from the .pem file) and opens a terminal where we can access the server, data, code, etc. From the terminal, we can see the SSH authorized keys.
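As a sketch of logging in to the server with the AWS-issued private key (the key file name and server IP are placeholders; the user name is ec2-user on Amazon Linux and ubuntu on Ubuntu AMIs):

$ # The private key file must not be readable by others, or ssh will refuse it
$ chmod 400 my-key.pem
$ # Log in to the instance using the private key
$ ssh -i my-key.pem ec2-user@<server-public-ip>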

4.4 FILE TRANSFER PROTOCOL (FTP)

FTP is known as File Transfer Protocol. FTP is used to transfer data or computer files from the client to the server side through a computer network. FTP uses the internet's TCP/IP protocols for the transfer. FTP is an easy way to transfer files, photos, etc. from the client side to the server side. The file is transferred from the user's computer, known as the local host, to a second machine, known as the remote host, with the help of the internet. The local host connects to the remote host's IP address and to the port through which the file transfers.

The user enters a username and password, which provides security for our data. FTP software may have a GUI with which we can easily drag a file from the local host and drop it onto the remote host. FTP gives you a fast, easy, secure and reliable way to update and maintain a website. FTP uses port 21 for control commands and port 20 for transferring data from the local host to the remote host.

FileZilla is FTP/SFTP (Secure File Transfer Protocol) client software used to transfer data between client and server. In FileZilla, we enter the remote host's IP and port number, and then for authentication we supply the private key as a .ppk file, which connects the client to the server side and allows data to be transferred over the internet.
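For reference, the same kind of transfer can be done from the command line with scp/sftp, which is roughly what FileZilla does through its GUI; the key file, user and host below are placeholders:

$ # Copy a local file to the remote user's home directory over SSH
$ # (it can then be moved into /var/www/html with sudo)
$ scp -i my-key.pem index.html ec2-user@<server-public-ip>:~/
$ # Or open an interactive SFTP session
$ sftp -i my-key.pem ec2-user@<server-public-ip>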

CHAPTER 5

TOOLS AND TECHNOLOGY

5.1 MATERIAL REQUIREMENTS

PC with Intel Core i5 processor (recommended)

4 GB RAM (recommended)

500 GB Hard Drive (recommended)

Fast and reliable internet connection to operate AWS and other tools

SOFTWARE REQUIREMENTS

5.2 PUTTY AND PUTTY-gen

PuTTY is a free, open-source terminal emulator and network file transfer application. PuTTY supports several network protocols such as SSH, SCP, Telnet, etc., and it can connect to a specified host and port. PuTTY sends typed commands and receives text responses over a TCP/IP socket like a traditional terminal, and it uses SSH with public key encryption. PuTTY helps to secure data, and it can also be used to transfer data.

PuTTYgen is an application that converts the private key from a '.pem' file to a '.ppk' file, since PuTTY only accepts .ppk files. In PuTTY, we enter the IP of the website, port 22, and the .ppk file, through which we access the terminal. This creates a session through which we can access the code, data, etc. of the server/website. In PuTTY, we can also save the session.

PuTTY works with SSH authentication, which helps to secure the backend of our website/server. It ensures that only authorized users can access it. It takes the private key of the server, and we know the private key is held only by authorized users. This helps to secure the data and its integrity.

5.3 KALI LINUX

Kali Linux is a Debian-based Linux distribution aimed at penetration testing and security auditing. Kali Linux contains a large number of tools for attacking, security, penetration testing, social engineering and reverse engineering. Using Kali Linux can be illegal if you use it for attacking, but it is legal if you use it for learning and teaching. Kali Linux is most commonly used for ethical hacking and security. With it, we can also perform session hijacking and mobile hijacking. Kali Linux helps us find the loopholes and vulnerabilities in our website and fix them, to make sure that an intruder cannot hack the website.

Hackers misuse the Kali Linux platform to attack websites or computers, for example with DoS attacks or session hijacking, and steal information or data. Kali Linux contains more than 600 testing tools. We use Kali Linux mainly for two things:

 Nmap

 DOS(Denial of Service)

5.4 NMAP

Nmap stands for Network Mapper. It is an open-source tool for scanning the ports of a particular IP. Nmap reports which ports are open; if any unnecessary port is open, an attacker can attack the website more easily. It includes many port-scanning mechanisms for TCP, OS detection, version detection, ping sweeps, and more. It is flexible, powerful, portable, and easy to use. We can scan multiple IPs as well.

Nmap is used to discover hosts and services on a computer network by sending packets and analysing the responses. It can also audit the security of a device or firewall by identifying the network connections that can be made to or through it.

A basic Nmap command is: nmap 192.168.1.1
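A few other common Nmap invocations, shown as a sketch (the target addresses are examples):

$ # Scan the 1000 most common ports of a single host
$ nmap 192.168.1.1
$ # Scan specific ports and detect service versions
$ nmap -p 22,80,443 -sV 192.168.1.1
$ # Ping sweep of a whole /24 subnet to discover live hosts
$ nmap -sn 192.168.1.0/24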

5.5 DENIAL OF SERVICE (DOS)

DOS stands for Denial of Service. In a DoS attack, we use the hping3 tool, in which we specify the target IP; the command sends tens of thousands of requests to that IP, and the flood of requests brings down the server at that IP, so the website stops working. It is a very dangerous attack, and it is illegal.

There are 3 types of DoS attacks:

1. DoS (Denial of Service)

In a plain DoS attack, we directly send the requests to the host machine from a single source and bring the server down.

2. DDoS (Distributed Denial of Service)

In a typical DDoS attack, the attacker begins by exploiting a vulnerability in one computer system and making it the DDoS master. The attacker's system then identifies other vulnerable computers and takes them over, either by infecting them with malware or by bypassing the authentication controls; the compromised machines then all act as clones and send requests to the host machine.

3. DrDoS (Distributed Reflection Denial of Service)

DrDoS stands for Distributed Reflection Denial of Service attack. DrDoS techniques typically involve multiple victim machines that inadvertently participate in a DDoS attack on the attacker's target. Requests sent to the victim machines are reflected from the victims to the target.

The command for a DoS attack is:

hping3 -c 10000 -d 120 -S -w 64 -p 21 --flood --rand-source www.certifiedhacker.com

1) hping3 = name of the application

2) -c 10000 = number of packets to send

3) -d 120 = size of each packet

4) -S = send SYN packets

5) -w 64 = TCP window size

6) -p 21 = send to port 21

7) --flood = send packets as fast as possible

8) --rand-source = use random source IP addresses

9) www.certifiedhacker.com = destination IP address or host name

SCREENSHOTS

FUTURE WORK

The work done so far in this project showcases the need for security groups in order to protect the website from intruders and hackers breaking in and stealing sensitive, extremely confidential information. This can be taken further by adding more security levels to protect the website, so that nobody can attack it by hitting its IP.

To extend the current project, we can add more security groups to our server to make it even more secure and trustworthy. We can add an S3 bucket to expand storage and gain easy management and security along with performance.

We can also add data encryption to keep the company's information confidential; even if the data gets into the wrong hands, there is not much of a threat.

An HTTPS connection can be used to make the client-server communication through the website more secure and reliable.

IAM Roles can be used to define a set of permissions for different AWS service requests; they are not user- or group-specific.

We can add cryptographic ciphers to the project by using the AES-256 encryption standard. This cipher is top-tier, as its key space is extremely large and complex.

Thus, a company generally needs to maintain 3 principles:

1) Availability 2) Integrity 3) Confidentiality

All of these are the most important aspects for a company in order to grow and remain strong, and they need to be taken care of.

CONCLUSION

Basically, the main motive of the whole project is to secure our data and information in the cloud, particularly on AWS, through security groups. We use the services provided by AWS to rent a server and add certain elements, like subnets, EC2 instances, a VPC, an internet gateway, etc., to store information and data on the server.

Firstly, we create an AWS account and then create a VPC, and then we select an availability zone. An instance needs to be created to effectively use the AWS compute resources. Through an instance, you can easily rescale the capacity of the server. Subnets need to be created for different instances to communicate with each other or with the internet. Public IP addresses are assigned to connect to the outside world through the gateway, and private IP addresses are used for internal communications. Every subnet has a route table to route the network traffic to a particular target. We can add security groups to our instances to protect them from hackers and intruders entering the server and stealing copyrighted data.

Every user or customer has a unique private key that is confidential. PuTTYgen is an application that converts the private key from a '.pem' file to a '.ppk' file. In PuTTY, we enter the IP of the website, the port number and the .ppk file, through which we access the terminal. This creates a session through which we can access the code, data, etc. of the server/website. PuTTY works with SSH authentication, which helps to secure the backend of our website/server and ensures that only authorized users can access it. After this, we enter the terminal, through which we have access to the backend of our website. From here, we can easily modify the code.

FileZilla is SFTP (Secure File Transfer Protocol) software used to transfer data between client and server using the TCP/IP protocol suite.


