

Term Paper
on
DOCKER AND KUBERNETES CONTAINERS
Submitted to
Amity University Uttar Pradesh

In partial fulfilment of the requirements for the award of the degree


of
Bachelor of Technology
in
Computer Science and Engineering
by
Aminotes
A2305219999
Under the guidance of

Ms Faculty Name

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


AMITY SCHOOL OF ENGINEERING AND TECHNOLOGY
AMITY UNIVERSITY UTTAR PRADESH

JULY 2017

Declaration

I, Aminotes, student of B.Tech (3-C.S.E.-21(Y)), hereby declare that the project titled
"Docker and Kubernetes Containers", which is submitted by me to the Department of
Computer Science and Engineering, Amity School of Engineering and Technology, Amity
University Uttar Pradesh, Noida, in partial fulfilment of the requirements for the award of the
degree of Bachelor of Technology in Computer Science and Engineering, has not previously
formed the basis for the award of any degree, diploma or other similar title or recognition.

The author attests that permission has been obtained for the use of any copyrighted material
appearing in the dissertation/project report other than brief excerpts requiring only proper
acknowledgement in scholarly writing, and that all such use is acknowledged.

Date: __________________

Aminotes

A2305216649

3CSE-7 (2016-20)

CERTIFICATE

This is to certify that Mr Aminotes, a student of B.Tech in Computer Science and
Engineering, has carried out the work presented in this term paper, entitled "Docker
and Kubernetes Containers", as part of the first-year programme of the Bachelor of
Technology in Computer Science and Engineering at Amity University, Uttar Pradesh,
Noida, under my supervision.

_________________________

Ms Faculty Name

Department of Computer Science and Engineering

ASET, Noida

ACKNOWLEDGEMENT

The satisfaction that accompanies the successful completion of any task would be
incomplete without mention of the people whose ceaseless cooperation made it possible, and
whose constant guidance and encouragement crown all efforts with success. I would like to
thank Prof (Dr) Abhay Bansal, Head of Department, CSE, Amity University, for giving
me the opportunity to undertake this project. I would like to thank my faculty guide, Ms
Faculty Name, who has been the biggest driving force behind the successful completion of
this project. She has always been there to resolve any query of mine and has guided me in the
right direction regarding the project. Without her help and inspiration I would not have been
able to complete it. I would also like to thank my batchmates, who guided me, helped me and
gave me ideas and motivation at each step.

Aminotes

ABSTRACT

Not long ago, application software was big and monolithic. Applications sat in large blocks
on lumps of steel and silicon, resisting alteration and unwilling to move from their place.
This was a problem for organisations that needed to move fast, so it was no surprise that
virtual machines (VMs) caught on. With VMs, applications were no longer tied to particular
pieces of hardware, and everything could move around more freely. Unfortunately, VMs are
also very complex: they imitate a whole computer inside another computer, and that virtual
computer is itself complicated and still needs managing. And since virtual machines are
smaller and easier to create than physical ones, there are soon many of them around, all
requiring management.

Docker takes a different approach. If you put your software inside a container, it separates the
complexity of your application from the infrastructure underneath, making the infrastructure
simpler and the application easier to ship around. On top of this organisational efficiency, the
leap in technical speed and efficiency compared to virtual machines is dramatic: containers
boot in milliseconds, not minutes, and memory is shared rather than allocated. This makes your
application much cheaper to run, but it also means that you can architect your application in
the way that you want to, not in the way that fits the constraints of slow, inflexible
infrastructure. Kubernetes is another open source platform in the same space as Docker.

This report presents an overview of Docker and its fundamentals: a concise history of Docker
and containers, their origin, core concepts, the toolset, the architecture, and typical usage.

TABLE OF CONTENTS

DECLARATION

CERTIFICATE

ACKNOWLEDGEMENT

ABSTRACT

TABLE OF CONTENTS

1. INTRODUCTION
1.1. Overview
1.1.1. What are Containers?
1.2. History
1.3. Objectives
1.4. Scope of Docker
1.5. Application
2. REVIEW
2.1. The Docker project
2.2. Containers vs. Virtual Machines (VMs)
2.3. What is Docker good for?
2.4. Dockerizing applications
2.4.1. Manual builds
2.4.2. Dockerfile
2.5. CONTAINER TECHNOLOGY: How it fits in the cloud and further challenges
2.6. PROS
2.7. CONS
CONCLUSION AND FUTURE PROSPECTS
REFERENCES

1. INTRODUCTION

1.1 OVERVIEW

Docker is an effective container technology that is now used worldwide. It lets you "build,
ship, and run any app, anywhere." It has achieved a great deal in a remarkably short span of
time and is now considered a standard way of addressing one of the costliest aspects of
software: deployment.

Docker is currently the world's foremost software container platform. It is an open source,
container-based technology. Developers use it to eliminate "works on my machine" problems
when collaborating on code with colleagues. Operators use Docker to run and manage
applications side by side in isolated containers in order to achieve better server density.
Enterprises use the technology to build fast, simple software delivery pipelines so that new
features can be shipped more quickly, securely and confidently, for both Windows Server and
Linux applications.

Docker also allows you to break a monolithic application down into many small services,
each of reduced complexity, and put them into isolated containers. In this way small teams
can each work on their own service using the technology best suited to the task (you are not
bound to the same stack as the rest of the application). Moreover, this has great synergy with
the whole idea of microservices.

According to an article on Linux.com:

“Docker is a tool that can package an application and its dependencies in a virtual container
that can run on any Linux server. This helps enable flexibility and portability on where the
application can run, whether on premises, public cloud, private cloud, bare metal, etc.”

Figure 1: Docker logo [1]
Figure 2: Kubernetes logo [2]


Kubernetes is another platform in the container space. It is an open source system used to
manage containerized applications across numerous hosts, providing basic tools for the
deployment, scaling and maintenance of applications.

“Kubernetes builds upon a decade and a half of experience at Google running production
workloads at scale using a system called Borg, combined with best-of-breed ideas and
practices from the community. Kubernetes is hosted by the Cloud Native Computing
Foundation (CNCF).”
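
As a hedged illustration of the deployment and scaling tools described above, the following
kubectl session creates a small deployment and scales it across a cluster. A working cluster
and the kubectl client are assumed, and the deployment name "web" and the nginx image are
arbitrary examples rather than anything prescribed by Kubernetes itself.

# create a Deployment that runs the nginx image
kubectl create deployment web --image=nginx

# scale it to three replicas; Kubernetes schedules them across the available hosts
kubectl scale deployment web --replicas=3

# expose the Deployment inside the cluster on port 80
kubectl expose deployment web --port=80

# check that the pods are running
kubectl get pods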

1.1.1 What are Containers?

Everything needed for a piece of software to run is packed into an isolated container. Unlike
VMs, containers do not package a complete operating system; only the libraries and settings
required to make the software work are included. This makes for efficient, lightweight,
self-contained systems and ensures that the software runs the same way every time,
irrespective of where it is deployed.

A container lets a developer bundle up an application with all of its components: the code,
the stack it runs on and the dependencies connected with it are packaged together in one box,
the container. It is an isolated environment inside which the application has everything it
needs to run.
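
As a small, hedged illustration (Docker is assumed to be installed; the python:3.11-slim
image is just an arbitrary public image), the following single command pulls an image that
already contains a Python interpreter and its libraries and runs a program inside an isolated
container, without installing anything on the host:

# run a throwaway container from a public image and execute one command inside it
docker run --rm python:3.11-slim python -c "print('hello from inside a container')"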

1.2 HISTORY

Docker was started in France as an in-house project at dotCloud by Solomon Hykes, together
with other dotCloud engineers, notably Andrea Luzzardi and Francois-Xavier Bourlet; Jeff
Lindsay was also involved as an independent collaborator. dotCloud's proprietary technology
evolved into Docker, which became so successful that dotCloud was eventually rebranded as
Docker, Inc.

Docker was initially released in March 2013 as an open source project. On 13 March 2014,
with the release of version 0.9, Docker dropped LXC as the default execution environment
and replaced it with its own libcontainer library, written in the Go programming language.
Docker 1.0 followed in June 2014, when Docker Inc. considered the platform mature enough
for production use, with the company and its partners offering paid support options. The
project is still evolving rapidly, as shown by the monthly release of updates that add fresh
features and address issues as they are found. The project has, however, successfully
decoupled 'ship' from 'run', so images sourced from any version of Docker can be used with
any other version (with both forward and backward compatibility), which provides a stable
foundation for Docker use despite the rapid change. As of 24 October 2015, the project was
the 20th most-starred project on GitHub, with more than 25,600 stars, more than 6,800 forks,
and approximately 1,100 contributors.

Figure 3: Evolution of Docker Inc. [3]

A study in May 2016 showed that the main contributors to Docker were the Docker team
itself, Canonical, CenturyLink, Cisco, Amazon, Google, IBM, Huawei, Red Hat and
Microsoft.

An analysis of LinkedIn profiles in January 2017 showed that mentions of Docker grew by
160% during 2016.

Kubernetes was founded by Brendan Burns, Craig McLuckie and Joe Beda, who were quickly
joined by other Google engineers, including Brian Grant and Tim Hockin. It was first
announced by Google in mid-2014. Its design and development were deeply influenced by
Google's Borg system, and many of its top contributors previously worked on Borg. 'Project
Seven' was the original codename for Kubernetes within Google, a reference to a Star Trek
character who is a 'friendlier' Borg; the seven spokes on the wheel in the Kubernetes logo are
a nod to that codename.

Kubernetes v1.0 was released on 21 July 2015. Alongside the release, Google partnered with
the Linux Foundation to form the Cloud Native Computing Foundation (CNCF) and offered
Kubernetes as its seed technology.

1.3 OBJECTIVES

The objectives of this term paper are:

1. To study the fundamentals of Docker and container technology.
2. To study in brief the Docker project and the difference between containers and VMs.
3. To gain an understanding of Docker Hub, Dockerfiles, Docker images, etc.
4. To learn about some of Docker's benefits and what it is good for.
5. To understand the pros and cons of Docker and its applications in the modern IT world.

1.4 SCOPE OF DOCKER

Docker is a really big thing nowadays.

Docker is now probably one of the fastest-growing software projects of all time. Over the
last year alone it has gained more than 7,500 forks and around 30,000 GitHub stars, and it
has received an outstanding number of pull requests from the likes of IBM, Red Hat, Cisco,
Google, Microsoft and VMware.

“ Docker has hit this critical mass by responding to a critical need for many software
organizations: the ability to build software in an open and flexible way and then deploy it
reliably and consistently in different contexts. There is no need for you to learn a new
programming language, purchase costly hardware, or do much in the way of installation or
configuration to build, ship, and run applications portably. ”

Docker has huge scope for the future. This technology is revolutionising not only the way we
develop and deliver applications but also the way we deliver IT infrastructure.

1.5 APPLICATION

Docker Inc. has set a clear path for progress on cross-service management (libswarm), on the
core capabilities (libcontainer) and on messaging between containers (libchan). In the
meantime the company has already shown an eagerness to consume its own ecosystem, with
the Orchard Labs acquisition. With a benevolent dictator in the shape of CTO Solomon Hykes
at the helm, there is a clear nexus of technical leadership for both the company and the
project. The first eighteen months of the project demonstrated an ability to move fast by
consuming its own outputs, and there are no signs of that fading.

“ Many investors are looking at the features matrix for VMware’s ESX/vSphere platform from
a decade ago and figuring out where the gaps (and opportunities) lie between enterprise
expectations driven by popularity of VMs & the present Docker ecosystem. Areas like
networking, storage and fine grained version management (for the contents of containers) are
presently underserved by the existing Docker ecosystem, and provide opportunities for both
start-ups and incumbents.

Over time it’s likely that the distinction between VMs and containers (the ‘run’ part of
Docker) will become less important, which will push attention to the ‘build’ and ‘ship’
aspects. The changes here will make the question of ‘what happens to Docker?’ much less
important than ‘what happens to the IT industry as a result of Docker?’ ”

Figure 4: Image showing the containers for shipping applications [4]

2. REVIEW

2.1 The Docker project

The 'Docker project' is not the same thing as Docker Inc., the company. Docker Inc. acts as a
kind of guardian of the Docker project: it is where everything started and it remains the major
driving force, but the company does not really own it. Docker, the container technology,
belongs to the community.

Because it is open source, everybody is free to download it, use it and contribute to it, so long
as they adhere to the terms of the Apache License 2.0.

The aim of the Docker project is to provide open tools to build, ship, run and deploy
applications better. In the same way that VMware is far more than the ESXi hypervisor, the
Docker project is far more than the Docker Engine. The Docker Engine is the core piece of
software for building and managing images and running containers. If you know VMware
and can relate to the comparison, Docker Engine is the core technology that all the other
Docker project technologies, along with third-party tooling, build on and build around.

Everybody in this field is contributing to it, and some of the biggest names in the industry are
involved. The code is up on GitHub for the world to see. The core Docker components are
written in Go (Golang), a modern programming language that came out of Google. The
project is widely available on GitHub, heavily developed and heavily used.

Docker Hub is the public Docker registry, a place where you can store and retrieve Docker
images. There are over 240,000 repositories there, and images from those repositories have
been pulled (downloaded) well over a billion times, at a rate of over 5 million pulls per day.
And that counts only the public repositories on the official Docker Hub; Docker Hub also
hosts private repositories, and beyond Docker Hub there are third-party registries as well, so
the actual numbers are even higher.
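
A minimal sketch of how a developer typically interacts with Docker Hub follows. The
repository name myuser/webserver is a hypothetical example, and docker push assumes a
prior docker login to an account that owns that repository.

docker search nginx                      # find public images on Docker Hub
docker pull nginx                        # download (pull) an image from the registry
docker tag nginx myuser/webserver:1.0    # re-tag it under your own repository name
docker push myuser/webserver:1.0         # upload (push) it to Docker Hub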


Figure 5: A sample of code from the core Docker Engine [5]

2.2 Containers vs. Virtual Machines (VMs)

Containers and virtual machines provide similar resource isolation and allocation benefits,
but they work differently because containers virtualise the operating system (OS) rather than
the hardware. Of the two, containers are more portable, take up less space and are more
efficient.

2.2.1 CONTAINERS
Containers are an abstraction at the application layer that packages code and dependencies
together. Several containers can run on the same machine and share the OS kernel, each
running as an isolated process in user space, so they take up less space.
2.2.2 VIRTUAL MACHINES
Virtual machines (VMs) are an abstraction of physical hardware that turns one server into
many. A hypervisor allows several VMs to run on one machine. Each VM contains a
complete copy of an operating system, one or more applications, and the necessary libraries
and binaries, taking up tens of gigabytes. VMs can also be slow to boot.


Figure 6: Diagram showing the structure of a hypervisor (VMware) stack and a container stack [6]

Containers share a single kernel and can share application libraries. This can result in a
substantially smaller RAM footprint than virtualisation systems, even those that make use of
RAM over-commitment. Storage footprints can also be reduced where deployed containers
share underlying image layers. Containers show lower system overhead than VMs, so the
performance of an application inside a container is generally the same as, or better than, the
same application running inside a VM. A team of IBM researchers has published a
performance comparison of Linux containers and virtual machines.
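
A rough way to see the start-up difference for yourself, assuming Docker is installed (the
alpine image is an arbitrary small example, and exact timings vary from host to host):

# start a container, run one trivial command and remove it, timing the whole thing;
# this typically completes in well under a second, versus minutes to boot a VM
time docker run --rm alpine echo "container started"

# show the memory and CPU footprint of any running containers
docker stats --no-stream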

2.3 What is Docker good for?

1) REPLACES VIRTUAL MACHINES (VMS)

Docker can replace VMs in many situations. If you care only about the application and not
about the operating system, Docker can substitute for a VM. Not only is a container faster to
spin up than a VM, it is more lightweight to move around, and because of Docker's layered
filesystem it is much quicker and easier to share changes with others. It is also firmly rooted
in the command line and is eminently scriptable.

2) PROTOTYPING SOFTWARE

If you want to experiment with software without disturbing your existing setup or going to
the trouble of provisioning a VM, Docker can give you a sandbox environment in
milliseconds, as in the sketch below.
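
A minimal sketch of such a sandbox (the ubuntu:22.04 image is an arbitrary example):

# drop into an interactive shell in a disposable Ubuntu container;
# anything installed or changed here disappears when the shell exits (--rm)
docker run --rm -it ubuntu:22.04 bash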

3) ENABLING A MICROSERVICES ARCHITECTURE

Docker facilitates the decomposition of a complex system into a series of composable parts,
which allows you to reason about your services in a more discrete manner. This lets you
restructure your software so that its parts are more pluggable and more manageable, without
disturbing the whole.

4) PACKAGING SOFTWARE

Docker is a great way to package software because, from a Linux user's point of view, a
Docker image has effectively no dependencies. You can build your image and be confident
that it will run on any modern Linux machine; think Java, but without needing a JVM already
installed.

5) MODELLING NETWORKS

Since you can spin up hundreds of isolated containers on a single machine, modelling a
network is a breeze. This can be useful for testing real-world scenarios without breaking the
bank; a small sketch follows below.
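
A hedged sketch of a tiny two-container network (the network name testnet and the nginx
and alpine images are arbitrary examples):

docker network create testnet                       # create an isolated user-defined network
docker run -d --name web --network testnet nginx    # start a web server on that network
# a second container on the same network reaches the first by its container name
docker run --rm --network testnet alpine wget -qO- http://web
docker rm -f web && docker network rm testnet       # clean up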

6) ALLOWING FULL-STACK PRODUCTIVITY WHEN OFFLINE

Since you can bundle all the parts of your system into Docker containers, you can orchestrate
them to run on your laptop and work on the move, even when you are offline.

7) REDUCING DEBUGGING OVERHEAD

Complex negotiations between different teams about delivered software are a common aspect
of the industry. Most practitioners have experienced endless discussions about problematic
dependencies, broken libraries, updates applied wrongly or in the wrong order, unreproducible
bugs, and so on. Docker lets you state clearly (even in script form) the steps for reproducing
and debugging a problem on a system with known properties, making bug and environment
reproduction a much simpler affair, and one normally separated from the host environment.

8) ENABLING CONTINUOUS DELIVERY

Continuous delivery (CD) is a paradigm for software delivery based on a pipeline that
rebuilds the system on every change and then delivers it to production (or "live") through an
automated (or partly automated) process. Because you can control the build environment's
state more exactly, Docker builds are more reproducible and replicable than traditional
software build methods, which makes implementing CD much easier. Standard CD
techniques such as Blue/Green deployment (where "live" and "last" deployments are
maintained side by side on live) and Phoenix deployment (where whole systems are rebuilt
on each release) become trivial with a reproducible, Docker-centric build process.
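
A hedged sketch of the Docker-specific part of such a pipeline follows; registry.example.com
and myapp are hypothetical names, and in practice these steps would run inside a CI system
on every change.

# build an image whose tag records exactly which commit it was built from
docker build -t registry.example.com/myapp:$(git rev-parse --short HEAD) .

# push it to the team's registry so every environment deploys the identical artefact
docker push registry.example.com/myapp:$(git rev-parse --short HEAD)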

2.4 Dockerizing applications

Virtually any Linux application can run inside a Docker container. There are no limits on the
choice of languages or frameworks. The only practical limitation is what a container is
permitted to do from an OS perspective. Even that bar can be lowered by running containers
in privileged mode, which drastically reduces controls (and correspondingly increases the
risk of the containerised application being able to cause harm to the host operating system).

Containers are started from images, and images can in turn be created from running
containers. There are essentially two ways to get an application into a container: manually, or
with a Dockerfile.

2.4.1. Manual builds


A manual build starts by running a container from a base operating system image. An
interactive terminal can then be used to install applications and dependencies using the
package manager offered by the chosen flavour of Linux. Zef Hemel provides a walkthrough
of this process in his article "Using Linux Containers to Support Portable Application
Deployment". Once the application is installed, the container can be committed to an image
and pushed to a registry (such as Docker Hub) or exported as a tar file, as sketched below.
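
A hedged sketch of such a manual build follows. The ubuntu:22.04 base image, the nginx
package and the repository name myuser/nginx-manual are arbitrary examples.

# start an interactive container from a base operating system image
docker run -it --name build-env ubuntu:22.04 bash
#   ...inside the container, install the application with the distribution's package manager:
#   apt-get update && apt-get install -y nginx
#   exit

# turn the modified container into a reusable image
docker commit build-env myuser/nginx-manual:1.0

# either push the image to a registry...
docker push myuser/nginx-manual:1.0
# ...or export the container's filesystem as a tar file
docker export build-env > nginx-manual.tar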

2.4.2. Dockerfile
A Dockerfile is a script that automates the construction of Docker images. Every Dockerfile
starts from a base image and then specifies a sequence of commands that are run inside the
container and/or files that are added to it. The Dockerfile can also specify the ports to be
exposed, the working directory and the default command to run when a container is started.
Containers built from Dockerfiles can be exported just like manual builds. Dockerfiles can
also be used with Docker Hub's automated build system, so that images are built from
scratch within a system under the control of Docker Inc., with the source of each image
visible to everybody who uses it. A minimal sketch follows below.
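
A minimal sketch of a Dockerfile-based build follows. The python:3.11-slim base image, the
app.py file and the port number are arbitrary assumptions, and the Dockerfile is written from
the shell with a heredoc only to keep the example self-contained.

cat > Dockerfile <<'EOF'
# base image the build starts from
FROM python:3.11-slim
# working directory used when a container starts
WORKDIR /app
# file added into the image
COPY app.py .
# port the container is expected to listen on
EXPOSE 8000
# default command run at container start-up
CMD ["python", "app.py"]
EOF

docker build -t myapp:1.0 .             # build an image from the Dockerfile
docker run -d -p 8000:8000 myapp:1.0    # run it, publishing the exposed port
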
Single process?
Regardless of whether images are built manually or with Dockerfiles, a key consideration is
that only a single process is invoked when the container is launched. For a container serving
a single purpose, such as running an application server, running a single process is not an
issue (and some argue that containers should only ever run a single process). For situations
where several processes need to run inside one container, a supervisor process must be
launched that can then spawn the other required processes. There is no init system inside
containers, so anything that relies on systemd, upstart or the like will not work without
modification.

2.5. CONTAINER TECHNOLOGY: How it fits in the cloud and further challenges

Docker is a container management system: it automates the process of creating containers
for running applications or their components. Containers are managed through a set of APIs
and can be built from templates or commands. It is possible to create container-based
systems on any OS that supports container-like partitioning, but Docker itself uses Linux
container tools, so Docker containers run Linux-specific applications and components.
Docker can run in a VM hosted by a different operating system, but it requires a Linux guest
OS inside that VM to serve the containers.

Although Docker containers can be hosted on a non-Linux host OS, they are still limited to
Linux applications, and using Docker on such hosts adds complexity. Docker containers
hosted on Windows servers can nevertheless be valuable to operators who have large
Windows Server estates and want to add Linux-based applications alongside them.

"Enterprises can host containers on public cloud VMs in their data centres, and the available
Docker tools can help to deploy container-based components, support workflows and
facilitate hybrid cloud use with easy failover support. VM-based virtualisation has the
advantage of isolation, which is useful for public clouds."

2.6. PROS
Some of the advantages of Docker are listed below:

1) Simplified and Easy Configuration


Just like virtual machines, Docker offers a distinct working environment in which to
deploy code, but there is no need to install a full operating system as there is with a VM.
You can use a container to isolate the working environment of an application and carry it
unchanged from development through to production. Nowadays, all the leading IaaS and
PaaS providers support Docker.
2) Consistent Environment
Docker solves one of the primary problems of software delivery: establishing a consistent
environment for code across different machines. During the life of an application the code
may traverse numerous machines on its way from development to production. Docker
offers a consistent environment on every machine, making development and deployment
much easier and quicker.
3) Easy App Isolation
Docker makes application isolation easier and more cost-effective. With containers,
separate applications are isolated from one another, so each can have its own environment
without influencing the applications deployed in other containers.
4) Faster Deployment
Containers have reduced deployment time considerably compared with virtual machines,
which is one reason why big names such as Google, Amazon and Facebook have embraced
container technology.
5) Security
Using Docker adds a layer of security because every container is isolated: one container
cannot poke into another. Resources are not shared arbitrarily between containers; each
container is given its own set of resources. This means that if something goes wrong in
one container, only the data inside that container is affected, while the other containers
remain safe. A brief sketch of per-container resource limits follows below.
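
A brief, hedged sketch of per-container resource limits (the container names, images and
limits are arbitrary examples):

# give each container its own memory and CPU budget so that one cannot starve the other
docker run -d --name svc-a --memory=256m --cpus=0.5 nginx
docker run -d --name svc-b --memory=256m --cpus=0.5 redis
docker stats --no-stream svc-a svc-b    # show usage against each container's limit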

2.7. CONS
Docker has a few disadvantages too:
1) Docker cannot totally replace virtual machines (VMs). Some applications cannot be
deployed well inside containers; Docker is best suited to microservices.
2) The tools needed to monitor and manage containers are still limited, although this
limitation is expected to disappear over time.

CONCLUSION AND FUTURE PROSPECTS

Today, all around the world, there is growing interest in and wide acceptance of Docker and
container technology. This has pushed legacy vendors to deliver at least their first container
products, but going forward it remains to be seen how easily these technologies can be
integrated and how well they fulfil technical requirements.

In this term paper I have given a brief review of container technology, focusing on Docker.
Its various uses, benefits and applications have been discussed. Another platform,
Kubernetes, is a similar open source system used for the management and deployment of
containerised applications. This report was focused on introducing Docker; I have mentioned
its advantages along with a few disadvantages, but on balance Docker is a strong tool with
many user-friendly benefits.

"There are several container platform alternatives to VMware, such as Docker, Kubernetes,
CoreOS and Cloud Foundry. VMware's Project Photon will be shipped with Pivotal Cloud
Foundry. The Open Container Project (OCP) will go a long way toward driving the IT
industry to one or two top container deployment methodologies, which will functionally
merge all the competing approaches."

OCP has already been signed by VMware, Amazon Web Services, IBM, HP, EMC,
Microsoft, Google, Red Hat and others. Google's open source Kubernetes project, CoreOS's
open source Rocket (rkt) project and the Docker platform will all contribute to future
container administration and management.

REFERENCES

Web Links:

1. https://www.docker.com
2. https://docs.docker.com/engine/docker-overview/
3. https://github.com/docker/
4. https://www.infoq.com/articles/docker-future
5. https://kubernetes.io/

Books:

1. 'Docker in Practice' by Ian Miell & Aidan Hobson Sayers
2. 'The Docker Book: Containerization Is the New Virtualization' by James Turnbull
3. Research paper: "Containers & Docker: Emerging Roles & Future of Cloud
Technology" by Sachchidanand Singh and Nirmala Singh

Image sources:
[1] paulund.co.uk/list-containers-docker
[2] kubernetes.io/
[3] Video tutorial: Pluralsight - Docker and Containers: The Big Picture
[4] www.docker.com/what-docker
[5] overlay.go
[6] Video tutorial: Pluralsight - Docker and Containers: The Big Picture
