INTRODUCTION
Cloud computing, often referred to as "the cloud", in simple terms means storing or accessing your data and programs over the internet instead of on your own hard drive. Everything these days is being moved to the cloud, running in the cloud, accessed from the cloud, or stored in the cloud. AWS is among the most trusted providers of cloud computing.
Data center infrastructure usually includes the power, cooling and building components necessary to support the data center hardware. The hardware infrastructure also comprises routers, physical cabling and dedicated network appliances, like network firewalls. Equally important is security. This includes physical security for the building, like electronic key entry, constant video and human surveillance of the premises, rigorously controlled access to the server and storage areas, and so on. This ensures only authorized personnel can access the data center hardware infrastructure and reduces the potential for unauthorized access or tampering.
Outside of the data center is the web infrastructure, which includes transmission media, like fibre optic cables, satellites, routers, aggregators, repeaters, load balancers and other network components that manage transmission paths. Web infrastructures are designed, engineered and operated by Internet Service Providers (ISPs). Once a business engages an ISP for web access, the ISP usually ties into the organization's data center infrastructure.
Cloud computing is changing the way infrastructures are designed and operated: cloud providers offer data center infrastructure and services for a fee. This infrastructure-as-a-service (IaaS) model permits flexible computing on demand. Users can invoke a cloud provider's compute, storage and services without the need to deploy those resources themselves.
1.2 WHAT IS A CLOUD?
Cloud computing, frequently referred to as "the cloud", in basic terms means storing or accessing your information and programs over the web as opposed to your own hard drive. Everything these days is moved to the cloud, running in the cloud, accessed from the cloud, or stored in the cloud. Hence the demand for certified cloud professionals is rising.
The cloud is somewhere at the opposite end of your internet connection, where you store your data and applications. Using the cloud has some clear advantages:
1. You do not need to maintain or administer any infrastructure yourself.
3. You can access your cloud-based applications from anywhere; you simply need a device with an internet connection.
The services that the cloud offers can be divided into 3 different models:
1. SaaS (Software as a Service)
In this model, the cloud provider leases applications or software that it owns to its customers. The customer can access this software on any device connected to the Internet, using tools such as a web browser, an app, and so on. For example, salesforce.com provides CRM (Customer Relationship Management) on a cloud foundation to its customers and charges them for it.
2. PaaS (Platform as a Service)
In this model, the cloud provider enables the client to deploy client-made applications using programming languages, tools and so on that are provided by the cloud provider. The client cannot control the underlying architecture, including operating systems, storage, servers and so on. For example, this service would suit you if you are a developer, since it gives you a platform for developing and deploying applications.
3. IaaS (Infrastructure as a Service)
In this model, the cloud provider furnishes the client with virtual machines and other resources as a service, abstracting the client from the physical machine, its location and so on. If the client needs a Linux machine, he gets a Linux machine and does not have to worry about the physical machine or the networking of the system it runs on.
CHAPTER 2
Cloud infrastructure is no different from typical data centre infrastructure except that it is virtualised and offered as a service to be consumed via the Internet. Servers, storage, compute resources and security are all key components of cloud infrastructure. This is achieved through virtualisation, which typically requires that individual servers, storage, compute and network resources be abstracted and pooled.
2.1 CLOUD DEPLOYMENT MODELS
1. Public Cloud: In this deployment model, the services are made available to the general public over the Internet, and some services can be used free of cost. Since the platform is open to everyone, it involves a risk factor that cannot be neglected.
2. Private Cloud: It is mainly designed for a single organisation, and its services can only be accessed by that organisation. It makes sense when a company is big enough to handle its own data centers and has the budget for it. The cost of a private cloud is higher, since hardware has to be upgraded periodically.
3. Hybrid Cloud: It combines private infrastructure with public cloud offerings, providing the efficiency of a public cloud together with the security of a private cloud.
2.2 AMAZON WEB SERVICES (AWS)
It is a global cloud platform that allows you to host and manage services on the internet. It provides many services with which you can run your applications in the cloud. It is based on a paid subscription, with a free trial option available for 12 months. Amazon Web Services (AWS) provides developers and businesses a simple way to create a set of related AWS resources and provision them in an orderly and predictable fashion. AWS offers a wide set of services. It helps users move fast in business, lower costs, and gain access to scalable and reliable infrastructure for deploying solutions with minimal support. It offers a whole set of cloud computing services that facilitate organizations' move to the cloud.
Why is it such a big hit? Everyone is trying to use AWS; everyone is trying to put their applications in the cloud. So what makes AWS the top cloud provider?
One of the biggest reasons is billing: billing is very transparent. AWS provides per-hour billing, and every instance or service has micro-billing. For an EC2 instance you get per-hour billing, which is very clear, while S3 buckets are charged per gigabyte.
The second reason is the easy sign-up process: you do not need to sign any agreement; just sign up with an email id, add a credit card, and you are in. The account is created within a minute.
The third reason is that AWS is a trusted vendor: it is the most trusted service provider in the IT market.
Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery and other functionality to help businesses scale and grow. In simple words, AWS allows you to do the following things:
Run web and application servers in the cloud to host dynamic websites.
Securely store all your files in the cloud so you can access them from anywhere.
Use managed databases to store information.
Deliver static and dynamic files quickly around the world using a Content Delivery Network (CDN).
The total cost of hosting your static website on AWS will vary depending on your usage. Typically, it will cost $1-3/month if you are outside the AWS Free Tier limits. If you qualify for the AWS Free Tier and stay within its limits, hosting your static website will cost around $0.50/month. Service prices are reduced with a 6- or 12-month subscription, and almost everything is billed per hour.
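As a rough sketch of how per-hour compute billing and per-gigabyte storage billing combine into a monthly bill, consider the following estimate. The rates used here are purely illustrative values, not current AWS prices, which vary by region, instance type and storage class:

```python
# Hypothetical rates for illustration only; real AWS prices vary by
# region, instance type, and storage class.
INSTANCE_RATE_PER_HOUR = 0.0116    # e.g. a small EC2 instance
STORAGE_RATE_PER_GB_MONTH = 0.023  # e.g. standard S3 storage

def monthly_cost(hours_run, gb_stored):
    """Combine per-hour compute billing with per-GB-month storage billing."""
    return hours_run * INSTANCE_RATE_PER_HOUR + gb_stored * STORAGE_RATE_PER_GB_MONTH

# A site running 24/7 for a 30-day month with 5 GB of assets:
print(round(monthly_cost(24 * 30, 5), 2))
```

The point of the sketch is that stopping an instance stops its per-hour charges immediately, which is what makes AWS billing feel transparent compared to fixed monthly hosting fees.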
2.3 AWS VS MICROSOFT AZURE
AWS and Microsoft Azure are two major players in the cloud computing industry; however, AWS is still larger than Azure. The server capacity of AWS is roughly six times the size of all of its competitors' server capacity combined.
In addition, AWS started its cloud journey back in 2006, compared to Microsoft Azure, which was launched in 2010, so in terms of service AWS's service model is a lot more mature. Amazon owns some of the biggest data centers in the world, strategically placed all around the globe. Azure is nowhere close to the capacity that Amazon has; then again, Microsoft has been working hard to match the kind of services and flexibility that Amazon offers, and now provides options such as Zone Redundant Storage that are on par with the services that Amazon offers.
2.4 GARTNER MAGIC QUADRANT
Recently, analyst house Gartner, Inc. released their 2019 Magic Quadrant covering public cloud infrastructure managed service providers (MSPs). The Quadrant lists 19 different MSP vendors and showcases their offerings, strong suits, vision and ability to execute. From these measurements, the vendors are placed into one of four quadrants.
The 19 vendors that Gartner analysed this year include 2nd Watch, Accenture and Atos, among others. The Leaders quadrant was more crowded this year, with six entrants.
CHAPTER 3
PROJECT PHASES
3.1 VIRTUAL PRIVATE CLOUD (VPC)
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you've defined. This virtual network closely resembles a traditional network that you'd operate in your own data centre, with the added benefit of using the scalable infrastructure of AWS. Among other things, you can:
iii. Define network interfaces and attach one or more network interfaces to the instances.
Amazon VPC builds massive computing power through Amazon EC2, scalable storage via S3, and dedicated private IP addresses via Amazon Elastic IP. Amazon
Elastic IP allocates separate IP addresses for each EC2 instance and isolates internet-accessible and non-accessible servers so that only the desired servers can be reached by remote users. Amazon VPC can also be connected with an in-house VPN to create a hybrid network.
A VPC spans all the Availability Zones in the region. After creating a VPC, you can
add one or more subnets in each Availability Zone. When you create a subnet, you
specify the CIDR block for the subnet, which is a subset of the VPC CIDR block. Each
subnet must reside entirely within one Availability Zone and cannot span zones.
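The CIDR arithmetic described above (each subnet block must be a subset of the VPC block, and subnets must not overlap) can be checked with Python's standard ipaddress module. The 10.0.0.0/16 VPC block and /24 subnet sizes below are illustrative values, not anything mandated by AWS:

```python
import ipaddress

# Illustrative VPC CIDR block; real values depend on your network design.
vpc = ipaddress.ip_network("10.0.0.0/16")

# Carve /24 subnets out of the /16, e.g. one per Availability Zone.
subnet_a, subnet_b = list(vpc.subnets(new_prefix=24))[:2]

# Each subnet CIDR must be a subset of the VPC CIDR block...
assert subnet_a.subnet_of(vpc) and subnet_b.subnet_of(vpc)
# ...and subnets must not overlap one another.
assert not subnet_a.overlaps(subnet_b)

print(subnet_a, subnet_b)  # 10.0.0.0/24 10.0.1.0/24
```

Doing this arithmetic up front avoids the overlapping-CIDR errors the VPC console would otherwise reject at creation time.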
Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones; by launching instances in separate Availability Zones, you can protect your applications from the failure of a single location. You can specify a range of publicly routable IPv4 addresses; however, direct access to the internet from publicly routable CIDR blocks in a VPC is not currently supported, and Windows instances cannot boot successfully when launched into a VPC with such ranges.
3.2 SUBNETS
Subnets are containers within your VPC that segment off a slice of the CIDR block you define in your VPC. Subnets allow you to apply different access rules and place resources in different containers where those rules should apply. We can also assign an IPv6 CIDR block to a subnet.
Public Subnet - A public subnet has an outbound route that sends all traffic through what AWS calls an Internet Gateway (IGW). The IGW lets traffic, IPv4 or IPv6, out of the VPC without any constraints on bandwidth. Instances in public subnets can also receive inbound traffic through the IGW as long as their security groups and network ACLs allow it.
Private Subnet - Contrast that with a private subnet which, if it needs to talk to the internet, must do so through a Network Address Translation (NAT) gateway. NATs are really common: if you run a wireless router, the router itself does network address translation. Importantly, a NAT won't allow inbound traffic from the internet; that's what makes a private subnet private.
To create a subnet:
2. In the navigation pane, click Subnets, and then click Create Subnet.
3. In the Create Subnet dialog box, select the VPC, select the Availability Zone, and specify the CIDR block for the subnet.
3.3 INTERNET GATEWAY
An internet gateway is a VPC component that permits communication between instances in your VPC and the internet, and it imposes no availability risks or bandwidth constraints on your network traffic. An internet gateway supports IPv4 and IPv6 traffic. To use an internet gateway, your subnet's route table must contain a route that directs internet-bound traffic to the internet gateway. You can scope the route to all destinations not explicitly known to the route table (0.0.0.0/0 for IPv4 or ::/0 for IPv6), or you can scope the route to a narrower range of IP addresses; for instance, the public IPv4 addresses of your company's public endpoints outside of AWS, or the Elastic IP addresses of other Amazon EC2 instances outside your VPC. If your subnet is associated with a route table that has a route to an internet gateway, it is known as a public subnet.
An internet gateway serves two purposes: 1) to provide a target in your VPC route tables for internet-routable traffic, and 2) to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.
To enable communication over the internet for IPv4, your instance must have a public IPv4 address or an Elastic IP address that is associated with a private IPv4 address on your instance. Your instance is only aware of the private (internal) IP address space defined within the VPC and subnet. The internet gateway logically provides one-to-one NAT on behalf of your instance, so that when traffic leaves your VPC subnet and goes to the internet, the reply address field is set to the public IPv4 address or Elastic IP address of your instance, and not its private IP address. Conversely, traffic that is destined for the public IPv4 address or Elastic IP address of your instance has its destination address translated into the instance's private IPv4 address before the traffic is delivered to the VPC. To enable communication over the internet for IPv6, your VPC and subnet must have an associated IPv6 CIDR block, and your instance must be assigned an IPv6 address from the range of the subnet. IPv6 addresses are globally unique and are therefore public by default.
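The one-to-one NAT behaviour just described can be pictured as a simple bidirectional address-mapping table. The addresses below are made-up examples (203.0.113.0/24 is a documentation range), not real AWS allocations:

```python
# Illustrative one-to-one NAT table, as the internet gateway maintains it:
# private instance address <-> public (Elastic) IP address.
nat = {"10.0.1.5": "203.0.113.10"}
reverse = {public: private for private, public in nat.items()}

def translate_outbound(src_private_ip):
    # Traffic leaving the VPC: the source address is rewritten to the public IP.
    return nat[src_private_ip]

def translate_inbound(dst_public_ip):
    # Traffic arriving from the internet: the destination is rewritten back
    # to the instance's private IP before delivery into the VPC.
    return reverse[dst_public_ip]

print(translate_outbound("10.0.1.5"))     # 203.0.113.10
print(translate_inbound("203.0.113.10"))  # 10.0.1.5
```

The instance itself never sees the public address; only the gateway performs the rewrite in both directions.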
3.7 ROUTE TABLES
A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. Every subnet in your VPC must be associated with a route table; the table controls the routing for the subnet. A subnet can only be associated with one route table at a time, but you can associate multiple subnets with the same route table. Some key points:
You cannot delete the main route table, but you can replace it with a custom route table that you have created.
CIDR blocks for IPv4 and IPv6 are treated separately. For example, a route with a destination CIDR of 0.0.0.0/0 does not automatically include all IPv6 addresses. You must create a route with a destination CIDR of ::/0 for all IPv6 addresses.
Every route table contains a local route for communication within the VPC over IPv4. If your VPC has more than one IPv4 CIDR block, your route tables contain a local route for every IPv4 CIDR block. If you have associated an IPv6 CIDR block with your VPC, your route tables contain a local route for the IPv6 CIDR block. You cannot modify or delete these routes.
When you add an internet gateway, a virtual private gateway, a NAT device, a peering connection, or a VPC endpoint to your VPC, you need to update the route table for any subnet that uses these gateways or connections.
There is a limit on the number of route tables you can create per VPC, and on the number of routes you can add per route table.
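Route selection follows longest-prefix matching: of all routes whose destination CIDR contains the address, the most specific one wins. A minimal sketch of that logic, with an illustrative route table (the igw-example target name is made up):

```python
import ipaddress

# Illustrative route table: destination CIDR -> target.
routes = {
    "10.0.0.0/16": "local",      # local route for the VPC CIDR (cannot be deleted)
    "0.0.0.0/0": "igw-example",  # catch-all route to an internet gateway
}

def route_for(dest_ip):
    """Return the target of the most specific (longest-prefix) matching route."""
    dest = ipaddress.ip_address(dest_ip)
    matching = [ipaddress.ip_network(cidr) for cidr in routes
                if dest in ipaddress.ip_network(cidr)]
    best = max(matching, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[str(best)]

print(route_for("10.0.4.7"))       # local: stays inside the VPC
print(route_for("93.184.216.34"))  # igw-example: internet-bound
```

This is why the local route always takes precedence over the 0.0.0.0/0 catch-all for traffic between instances in the same VPC: /16 is more specific than /0.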
3.8 SECURITY GROUPS
A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you can specify one or more security groups; otherwise, the default security group is used. You can add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. When deciding whether to allow traffic to reach an instance, AWS evaluates all the rules from all the security groups that are associated with the instance.
You might set up network ACLs with rules similar to your security groups in order to add an additional layer of security to your VPC. Some key properties of security groups:
Security groups are stateful: if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules, and responses to allowed inbound traffic are allowed to flow out regardless of outbound rules.
Instances associated with a security group cannot talk to each other unless you add rules allowing it.
Security groups are associated with network interfaces. After you launch an instance, you can change the security groups associated with the instance, which changes the security groups associated with the primary network interface (eth0). You can also change the security groups associated with any other network interface.
When you create a security group, you must provide it with a name and a description. Names and descriptions are limited to a restricted set of characters: letters a-z and A-Z, digits 0-9, spaces, and certain punctuation.
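The evaluation model described above (allow-only rules, with traffic permitted if any rule in any associated group matches) can be sketched as follows. The rule entries are illustrative examples, not a real AWS API:

```python
import ipaddress

# Illustrative inbound rules across all security groups attached to an instance.
rules = [
    {"protocol": "tcp", "port": 22, "source": "203.0.113.0/24"},  # SSH from one range
    {"protocol": "tcp", "port": 80, "source": "0.0.0.0/0"},       # HTTP from anywhere
]

def inbound_allowed(protocol, port, source_ip):
    """Security group rules are allow-only: permit if ANY rule matches."""
    src = ipaddress.ip_address(source_ip)
    return any(rule["protocol"] == protocol and rule["port"] == port
               and src in ipaddress.ip_network(rule["source"])
               for rule in rules)

print(inbound_allowed("tcp", 80, "198.51.100.7"))  # True: HTTP is open to all
print(inbound_allowed("tcp", 22, "198.51.100.7"))  # False: SSH limited to 203.0.113.0/24
```

Because there are no deny rules, tightening access is always done by narrowing or removing allow rules, never by adding blocks.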
3.9 ELASTIC COMPUTE CLOUD (EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2's simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change. Amazon EC2 changes the economics of computing by allowing you to pay only for capacity that you actually use. Amazon EC2 provides developers the tools to build failure-resilient applications and isolate them from common failure scenarios. Amazon EC2 provides the following features:
Preconfigured templates for your instances, known as Amazon Machine Images (AMIs), that package the bits you need for your server (including the operating system and additional software).
Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key in a secure place).
Storage volumes for temporary data that are deleted when you stop or terminate your instance, known as instance store volumes.
Persistent storage volumes for your data using Amazon Elastic Block Store (Amazon EBS), known as Amazon EBS volumes.
Multiple physical locations for your resources, such as instances and Amazon EBS volumes, known as Regions and Availability Zones.
A firewall that allows you to specify the protocols, ports, and source IP ranges that can reach your instances, using security groups.
Static IPv4 addresses for dynamic cloud computing, called Elastic IP addresses.
Metadata, called tags, that you can create and assign to your Amazon EC2 resources.
Virtual networks you can create that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network, known as virtual private clouds (VPCs).
Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network's access permissions, and run your image using as many or few systems as you want.
To use Amazon EC2, you simply:
Configure security and network access settings.
Choose which instance type(s) you want, then start, terminate, and monitor as many instances as needed.
Determine whether you want to run in multiple locations, use static IP endpoints, or attach persistent block storage to your instances.
Pay only for the resources that you actually consume, like instance-hours or data transfer.
3.10 ELASTIC BLOCK STORE (EBS)
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes, all while paying a low price for only what you provision.
Amazon EBS is designed for application workloads that benefit from fine-tuning for performance, cost and capacity. Typical use cases include big data analytics engines (like Hadoop), relational and NoSQL databases (like Microsoft SQL Server and MySQL), stream and log processing applications, and data warehousing applications. Amazon EBS allows you to create storage volumes and attach them to Amazon EC2 instances. Once attached, you can create a file system on top of these volumes, run a database, or use them in any other way you would use block storage. Amazon EBS volumes are placed in a specific Availability Zone, where they are automatically replicated to protect you from the failure of a single component. All EBS volume types offer durable snapshot capabilities.
Amazon EBS provides a variety of options that enable you to optimize storage performance and cost for your workload. These options are divided into two major categories: SSD-backed storage for transactional workloads, like databases and boot volumes, and HDD-backed storage for throughput-intensive workloads.
SSD-backed volumes include the highest-performance Provisioned IOPS SSD (io1) for latency-sensitive transactional workloads and General Purpose SSD (gp2), which balances price and performance for a wide variety of transactional data. HDD-backed volumes include Throughput Optimized HDD (st1) for frequently accessed, throughput-intensive workloads and the lowest-cost Cold HDD (sc1) for less frequently accessed data.
Elastic Volumes is a feature of Amazon EBS that allows you to dynamically increase capacity, tune performance, and change the type of live volumes with no downtime or performance impact. This lets you easily right-size your deployment and adapt to performance changes. Volume storage for EBS Provisioned IOPS SSD (io1) volumes is charged by the amount you provision in GB per month until you release the storage.
3.11 LOAD BALANCER
A load balancer is a device that acts as a reverse proxy and distributes incoming network traffic across a number of targets, such as instances in different Availability Zones. By using a load balancer, we can easily increase capacity and avoid overloading any single server.
In load balancers, the work of encryption and decryption can also be offloaded, so that the backend servers can devote their resources to the application itself.
Application Load Balancer - It works at the application layer and supports path-based routing. It can distribute requests to one or more ports on each container instance in the cluster. It routes requests based on the information found in application-layer protocols like HTTP/HTTPS.
Network Load Balancer - It works on the information found in network and transport layer protocols (IP, TCP, FTP, UDP) and is capable of handling millions of requests per second. After receiving a connection, it selects a target from the target group using a flow-hash routing algorithm, then attempts to open a TCP connection to the selected target.
Classic Load Balancer - These load balancers take decisions at the transport layer or the application layer. They require a fixed relationship between the load balancer port and the container instance port.
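The difference between simple round-robin distribution and the flow-hash selection mentioned for Network Load Balancers can be sketched as follows; the target IPs are illustrative stand-ins for instances in three Availability Zones:

```python
import hashlib
from itertools import cycle

targets = ["10.0.1.10", "10.0.2.10", "10.0.3.10"]  # e.g. one instance per AZ

# Round-robin: each new request goes to the next target in turn.
rr = cycle(targets)
print([next(rr) for _ in range(4)])  # wraps around after the third target

def flow_hash_pick(src_ip, src_port, dst_port, protocol="tcp"):
    """Flow-hash style selection: the same flow always maps to the same target."""
    key = f"{src_ip}:{src_port}:{dst_port}:{protocol}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return targets[digest % len(targets)]

# The same connection 5-tuple is consistently routed to one target.
assert flow_hash_pick("198.51.100.7", 40000, 80) == flow_hash_pick("198.51.100.7", 40000, 80)
```

Hashing the flow keeps all packets of one TCP connection on the same backend, which round-robin cannot guarantee.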
CHAPTER 4
The Apache HTTP Server is open-source software for serving websites; the Apache project is an effort to develop and maintain an HTTP server for modern operating systems, including UNIX and Windows. The Apache server can be easily integrated with other open-source applications such as PHP and MySQL, and over the internet it can connect to other applications. Apache also provides a secure, efficient server that offers server space, web services and file hosting.
The Apache server is software that receives a request from a user to access a web page, runs a few security checks on the HTTP request, and serves the requested page. It handles the communication with the website and also helps refresh cached content for new users.
Which sites the server answers for is determined in its configuration; the method of hosting a wide range of domain names on one server is known as virtual hosting. We can easily install the Apache server on our cloud instance using a command in the Linux terminal.
Now our Apache server is installed and we can use it to serve our website and make it available to users. The Apache server can also be stopped from the terminal, in which case the website cannot be accessed by users; the Apache server must be running for users to reach the site.
PHP is a server-side scripting language and a tool for making dynamic and interactive web pages. PHP also ships with a built-in development web server. PHP is widely used for building efficient and effective web pages, and it is open source. A PHP file can contain text, HTML, CSS, JavaScript and PHP code, so anyone writing PHP should have a basic understanding of HTML, CSS and JS. The PHP code is executed on the server and the result is shown in the browser; the file extension is '.php'. PHP is not limited to HTML output; it can also return PDF files, images and even video.
PHP can generate dynamic page content and also provides security features, including functions for encrypting data. It works on many platforms such as Windows, Unix and Mac, and is compatible with many servers. The user can add, delete and modify data in a database. Using PHP, the web page provider can easily insert data into the web page and make it more interactive, so that whenever a user opens the page it can easily be accessed. We can execute the code and show its output on the website in the web browser.
First we have to install PHP on our system using a command in the terminal. After installing it, code can be written in HTML, CSS and JS to make the website more dynamic, interactive and user-friendly. The PHP code is written in the body of the HTML, inside the <?php code ?> tags, and it generally uses HTML as well to make the website more interactive. PHP helps run the website and store the data of the user. First we set the path to the file we want to edit; we can then open the file using a command in the terminal.
4.3 SECURE SHELL (SSH)
SSH (Secure Shell) is a cryptographic protocol used to securely communicate with servers. SSH keys give us an easy and extremely secure way of logging into a server. An SSH key pair consists of two cryptographically related keys, a public key and a private key, that can be used for authenticating a user or client.
The private key should be retained by the client and kept secret. If the private key is not kept secret, an intruder can easily access any server configured with the associated public key. Encrypting the private key with a passphrase is an additional precaution.
The public key is shared freely. The public key can be used to encrypt messages/data that only the private key can decrypt. The public key is uploaded to the remote server that you want to log into with SSH.
The private SSH key is never exposed on the network, and its passphrase is only used to decrypt it on the local machine.
The private key is kept in a restricted directory, hidden from other users.
If an intruder wants to crack the private SSH key passphrase, the intruder must already have access to the system, which means they would already have access to your account. In that situation, the passphrase can prevent the intruder from immediately logging into your other servers.
So the pair of public key and private key helps to secure the data. SSH can be used through the PuTTY tools: PuTTY takes the private key as a .ppk file (converted from the .pem file) and opens a terminal where we can access the server, its data, code, etc. From the terminal, we can then administer the server.
4.4 FILE TRANSFER PROTOCOL (FTP)
FTP stands for File Transfer Protocol. FTP is used to transfer data or computer files between the client and the server side over a computer network. FTP runs on top of the internet's TCP/IP protocols. FTP is an easy way of transferring files, photos, etc. from the client side to the server side. The file is transferred from the user's computer, known as the local host, to a second machine, known as the remote host, with the help of the internet. The local host connects to the remote host using its IP address and the appropriate port.
The user enters a username and password, which protects access to our data. FTP software may have a GUI with which we can easily drag files from the local host and drop them onto the remote host. FTP gives you a fast, easy and reliable way to update and maintain a website (note that plain FTP is not encrypted, which is why SFTP is preferred here). FTP uses port 20 for transferring data and port 21 for control commands. FileZilla is an SFTP (Secure File Transfer Protocol) client used to transfer data between client and server. In FileZilla, we enter the remote host IP and port number, and then for authentication we provide the private key as a .ppk file, which connects the client to the server side and helps transfer the data over the internet.
CHAPTER 5
HARDWARE REQUIREMENTS
4 GB RAM (recommended)
Fast and reliable internet connection to operate AWS and other tools
SOFTWARE REQUIREMENTS
5.1 PUTTY
PuTTY is a free, open-source terminal emulator and network file transfer application. PuTTY supports several network protocols such as SSH, SCP and Telnet, and it can also connect to a serial port. PuTTY sends typed commands and receives text responses over a TCP/IP socket like a traditional terminal, and it uses SSH with public-key encryption. PuTTY helps to secure data and also helps to transfer it.
5.2 PUTTYGEN
PuTTYgen is an application that converts a private key from a ".pem" file to a ".ppk" file, since PuTTY only accepts .ppk files. In PuTTY, we enter the IP of the website, port 22, and the .ppk file, through which we access the terminal. It creates a session through which we can access the code, data, etc. of the server/website.
PuTTY relies on SSH authentication, which helps secure the backend of our website/server. It ensures that only authorized users can access it: it takes the private key for the server, and the private key is held only by authorized users.
5.3 KALI LINUX
Kali Linux is a Debian-based Linux distribution aimed at penetration testing and security auditing. Kali Linux contains a large number of tools for attacking, security, penetration testing, social engineering and reverse engineering. Using Kali Linux can be illegal if you use it for unauthorized attacks, but it is legal when used for learning and teaching. Kali Linux is most commonly used for ethical hacking and security work. With it we can also demonstrate session hijacking and mobile hijacking. Kali Linux helps us find the loopholes and vulnerabilities in our website and fix them, making sure an intruder cannot hack the website.
Hackers misuse the Kali Linux platform to attack websites or computers, for example with DoS attacks or session hijacking, and to steal information or data. Kali Linux contains more than 600 testing tools. We use Kali Linux mainly for two things:
Nmap
DOS (Denial of Service)
5.4 NMAP
Nmap stands for Network Mapper. It is an open-source tool for scanning the ports of a particular IP. Nmap reports which ports are open; if any unnecessary port is left open, an attacker can attack the website more easily. It includes many port scanning mechanisms for TCP, OS detection, version detection, ping sweeps, and more. It is flexible, powerful, portable and easy to use, and we can scan multiple IPs as well.
Nmap is used to discover hosts and services on a computer network by sending packets and analysing the responses, and to audit the security of a device or firewall by identifying the network connections that can be made to or through it.
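The core of a TCP connect scan (what nmap's -sT option performs) is simply attempting a connection and seeing whether it succeeds. A minimal sketch in Python, to be used only against hosts you are authorized to test:

```python
import socket

def port_open(host, port, timeout=0.5):
    """Attempt a TCP connection; an open port accepts it, a closed one refuses."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0  # 0 means the connect succeeded

# Check a few well-known ports on the local machine (FTP, SSH, HTTP).
for port in (21, 22, 80):
    print(port, "open" if port_open("127.0.0.1", port) else "closed")
```

Real nmap adds stealthier techniques (SYN scans, timing control, service fingerprinting), but the open/closed decision ultimately rests on responses like these.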
5.5 DENIAL OF SERVICE (DOS)
DoS stands for Denial of Service. In a DoS attack we use the hping3 tool, in which we specify a target IP; the command sends tens of thousands of requests to that IP, and the flood of requests brings down the server at that IP, making the website unavailable. In DoS, we send the requests directly to the host machine and make the server go down.
A DDoS (Distributed Denial of Service) attack typically begins with the attacker compromising one computer system and making it the DDoS master. The attacker's system then identifies other vulnerable computers and takes them over, either by infecting the systems with malware or by bypassing the authentication controls, so that the attack is launched from all of them at once.
DrDoS stands for Distributed Reflection Denial of Service attack. DrDoS techniques typically involve multiple victim machines that inadvertently participate in a DDoS attack on the attacker's target. Requests to the victim host machines are sent with the source address spoofed as the target's, so the victims' replies are reflected onto the target.
Command for a DoS attack (selected hping3 flags):
5) -w 64 = TCP window size
6) -p 21 = send traffic to port 21
SCREENSHOTS
FUTURE WORK
The work done so far in this project showcases the need for security groups in order to protect the website from intruders and hackers breaking in and stealing sensitive and extremely confidential information. This can be taken further by adding more security levels to protect the website so that nobody can get in.
To extend the current project, we can add more security groups to our server to make it even more secure and trustworthy. We can add an S3 bucket to expand the storage capacity. Otherwise, we can also add data encryption to keep the information of the company private and confidential, so that even if it gets into the wrong hands there is not much threat.
An HTTPS connection can be used to encrypt the client-server communication. An IAM role can be used to define a set of permissions for the different AWS services. We can also add cryptographic ciphers to the project by using the AES-256 encryption standard; this is a top-level cipher, as its key space is extremely large. All of these are important aspects for a company in order to grow while staying secure.
CONCLUSION
Basically, the main motive of the whole project is to secure our data and information in the cloud, particularly on AWS, through security groups. We use the services provided by AWS to rent a server and add certain elements like subnets, EC2 instances, a VPC, and security groups.
Firstly, we create an AWS account and then create a VPC, and then we select an availability zone. An instance needs to be created to effectively use the AWS compute resources; through an instance, you can easily rescale the capacity of the server. Subnets need to be created for the different instances to communicate with each other or with the internet. Public IP addresses are assigned to connect to the outside world through a gateway, and private IP addresses are used for internal communication. Every subnet has a route table to route the network traffic to a particular target. We can add security groups to our instances to protect them from hackers and intruders entering the server and stealing copyrighted data.
Every user or customer has a unique private key that is confidential. PuTTYgen is an application that converts the private key from a ".pem" file to a ".ppk" file. In PuTTY, we enter the IP of the website, the port number and the .ppk file, through which we access the terminal. It creates the session through which we can access the code, data, etc. of the server/website. PuTTY relies on SSH authentication, which helps secure the backend of our website/server and ensures that only the authorized user can access it. After this we enter the terminal, through which we have access to the backend of the server.
FileZilla is an SFTP (Secure File Transfer Protocol) application used to transfer data between client and server using the TCP/IP protocols.