
VMware vSphere 5.1 lab on our laptops



Let's say we want to learn about VMware vSphere products such as the ESXi server, vCenter Server, HA, vMotion, and FT. We could do that with two physical servers on which we would install the ESXi hypervisor, but we don't have servers to spare, and we don't want the extra heat generated and electricity consumed in our datacenter. Perhaps we also want our test environment to travel with us. We can do all of this on our laptops!
So, we will create the lab that is depicted in the following diagram:

We have two ESXi hosts that are connected to various networks for various purposes: management, NFS, iSCSI, vMotion and FT. Although some of these features (perhaps all?) could share a single adapter, this is not considered a best practice.
The only physical thing in this lab is my laptop. Everything else is virtual! I'm running this lab on Windows 8 and VMware Workstation 9.
To create this lab, we will go through these steps:
Installing and setting up VMware Workstation
Installing the ESXi host as a virtual machine
Installing a vSphere Client
Installing a vCenter Server
Configuring networking for NFS and vMotion
Try this all out


Installing and setting up VMware Workstation
After downloading this product from VMware's site, we start the installation. This is an easy, multi-step process. Let's begin.

After starting the installer we click Next.

We will be OK with the Typical installation.

Let's leave the default installation location and click Next.

We deselect Check for product updates on startup.

Right now we don't want to participate in VMware's improvement program, so we deselect Help improve VMware Workstation and click Next.

We want to create shortcuts.

We click Continue.

And the process is done.

Now it's time to start Workstation and go to the Virtual Network Editor. We select Edit->Virtual Network Editor. The defaults should look like this:

This is not what we want, so we will first delete all networks by selecting each of them and clicking Remove Network. Then we will add our networks to match our diagram. We click Add Network, select VMnet1 for our management network and click OK. Then we make sure that the Host-only (connect VMs internally in a private network) option is selected, that Use local DHCP service to distribute IP address to VMs is deselected, that the Subnet IP is 1.1.1.0 and that the Subnet mask is 255.255.255.0. When we click Apply, our settings should look like this:

We repeat the process for all the other networks. When we are done, we should see all our networks under the Windows Network Connections:

We will rename all connections so it is clearer to us which network serves which purpose. Right-clicking each connection and selecting Rename, repeated for every network, eventually yields this:

Now we make sure that each connection has proper IP settings. For example the Management network:

And the FT network:

We should keep in mind that these settings will break communication with the real IP addresses used here. For example, we won't be able to reach anything that belongs to the 1.1.1.0/24 segment from our laptop. Perhaps we should use 192.168.0.0/24 or similar networks, but this will do for the lab.
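To keep the plan in one place, here is a summary of the virtual networks. Only the management subnet (1.1.1.0/24) and the NFS subnet (2.2.2.0/24, fixed later by the NFS export wildcard) come from this walkthrough; the vMotion and FT subnets are placeholders of my own choosing, and any otherwise unused private subnets will work:

```text
VMnet1  host-only  Management  1.1.1.0/24
VMnet2  host-only  NFS         2.2.2.0/24
VMnet4  host-only  vMotion     3.3.3.0/24   (placeholder subnet)
VMnet5  host-only  FT          4.4.4.0/24   (placeholder subnet)
```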


Installing the ESXi host as a virtual machine
From the Workstation we select File->New Virtual Machine and start the wizard. We select Custom
(advanced):

On the Choose the Virtual Machine Hardware Compatibility screen we click Next:

Now we browse for an ESXi 5.1 ISO image and Workstation will detect what OS is in the ISO:

Next we give a name and specify location for the VM:

On the Processor Configuration screen, we select one for Number of processors and two for Number of cores per processor. These are the minimums for ESXi 5.1. We could go higher here, depending on our laptop's hardware:

For memory size, again the minimum is 2GB, but we could go higher:

On the Network Type screen we can select any option, because we will have to change it later no matter what we select now:

On the following several screens, we just accept defaults:





Finally, on the Ready to Create Virtual Machine we deselect Power on this machine after creation
and click Customize Hardware:

In the Hardware dialog, we only need to change the network adapters to match our requirements. For the management network we select VMnet1, for the FT network VMnet5, and so on:
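The same adapter mapping can also be verified directly in the VM's .vmx configuration file. A sketch of the relevant lines, assuming the adapter order used above (ethernet0 for management on VMnet1, ethernet1 for FT on VMnet5):

```text
ethernet0.connectionType = "custom"
ethernet0.vnet = "VMnet1"
ethernet1.connectionType = "custom"
ethernet1.vnet = "VMnet5"
```

Further adapters follow the same pattern with ethernet2, ethernet3, and so on.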

The summary portion of the Workstation window should now look like this:

Now we are ready to power the machine on and begin the installation of ESXi. This is illustrated in the following pictures:





Now we must specify and verify the root password and carry on:



After the VM is rebooted, we are presented with the ESXi console screen:

There are several more steps that need to be completed. We start by pressing the F2 key. From the menu presented to us, we go to Configure Management Network. Under IP Configuration we specify the IP address 1.1.1.2 and the subnet mask 255.255.255.0. Remember, this has to be in the same segment as the IP address we specified in the VMnet1 adapter's configuration. The default gateway has no meaning in our lab, but we must specify something, so we pick 1.1.1.1:

Under DNS Configuration we need to specify a primary DNS server and a hostname. Here we can make choices. If our laptop is running Windows 7/8, we would need another VM inside Workstation to act as a DNS server, or we could install a VM inside the ESXi host that we installed inside Workstation. Funny, isn't it :) If our laptop is running a Windows Server OS or Linux, we could install a DNS server directly in the OS the laptop is running. I'm running Windows 8, so for now I will choose 1.1.1.1 as the DNS server and change it later when we install a DNS server as one of the available options:
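For reference, the same management-network settings can be applied from the ESXi Shell (if enabled) instead of the DCUI. This is a sketch, assuming the default management VMkernel interface vmk0; the hostname esxi-1 is my own choice, not something fixed by the screenshots:

```shell
# Static IPv4 address on the default management VMkernel interface
esxcli network ip interface ipv4 set -i vmk0 -t static -I 1.1.1.2 -N 255.255.255.0
# Primary DNS server (1.1.1.1 for now, as in the DCUI steps above)
esxcli network ip dns server add -s 1.1.1.1
# Hostname ("esxi-1" is an assumed name)
esxcli system hostname set --host esxi-1
```

These commands only make sense on the ESXi host itself, so they are shown here purely as a reference.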

Finally, we press Escape and confirm our changes with Yes:

Now we have our first ESXi host installed and set up:

We can now ping 1.1.1.2 from our laptop:

Installing a second ESXi host is almost identical. After we are done we have this screen:


Installing a vSphere Client
Now that we have two virtual ESXi hosts, we need a tool to manage them. This tool is called the vSphere Client, and we can download it from the VMware site. We have several choices about where to install this client, similar to the DNS server installation. We could run it from the host OS, which is Windows 8 in our case, from a VM inside VMware Workstation, or from a VM installed within an ESXi host that is running as a VM inside Workstation.

Let's run it from the Windows 8 host OS.
We have the VMware vSphere ISO image downloaded and unpacked. We start the installation:








Now we start vSphere client, and connect to one of our ESXi hosts:

The first time, we will be presented with this certificate warning, which we will acknowledge. We will also acknowledge the evaluation dialog box warning:


And now we have our ESXi hosts managed with the vSphere Client, and we are ready to install our first virtual machine, which will become our vCenter Server.



Installing a vCenter Server
From the vSphere Client, we select File->New->Virtual Machine and complete the wizard. First we select Custom for the virtual machine creation type:

We give it a name:

For now, we only have a local datastore, so we will create this VM on that store and later migrate it to the shared datastore:

The hardware version should always be as high as possible:

For the guest OS version we select Microsoft Windows Server 2008 R2 (64-bit). We could also choose Microsoft Windows 2003 (64-bit), but we have to keep in mind that vCenter 5.1 can only be installed on a 64-bit version of Windows:

Depending on several choices, the amount of RAM required can vary. For a lab environment I would suggest at least 2GB. More is better, but we are limited by the amount of physical RAM in our laptop. For CPU, we can go with the defaults in our environment:


Although we created several networks, only one, VMnet1 (the management network), is available to us at this moment. This is OK for now:

For the next several steps we will go with the defaults, except for disk size. We cannot accept the default of 40GB, because we created our ESXi host with a 40GB disk and ESXi itself already took some of that space:




After reviewing our settings, we need to mount an ISO image containing the installation of Microsoft
Windows Server 2008 R2 (64-bit):

Before we power on the VM, we should edit its settings, go to Options, Boot Options, and select Force BIOS Setup. This ensures that when we power on the VM for the first time it will enter the BIOS, so we can set it to boot from the CD/DVD drive:
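The GUI checkbox maps to a standard VMware .vmx option; if we prefer, we can get the same one-time BIOS entry by adding a single line to the VM's configuration file:

```text
bios.forceSetupOnce = "TRUE"
```

The option resets itself after the next power-on, so the VM boots normally afterwards.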

Now we power on the VM and install Windows server. I guess we all know how to do that, so I will not
show these steps.


Now, from within the VM, we mount the ISO image that contains the vCenter Server installation files and run the setup. Previously we used the same installer to install the client, and now we will use it to install the server, so we select VMware vCenter Simple Install. We could install all of the components separately, such as the database, Single Sign On and so on, but for the sake of simplicity we will use the Simple Install option. It will install all the needed components for us: the database, Single Sign On, the Inventory Service and the vCenter Server itself.

After clicking Install, a simple wizard is run. First, the SSO install will begin. We can safely disregard
this warning:

After clicking Next several times, we need to provide the administrator's password for the SSO service. Any other service that needs to connect to the SSO service will need this password:

For the database, we will use the defaults, because this is a lab installation:

Next, we can use the IP address or the FQDN. It is recommended to use FQDNs, but so far we don't have a DNS server, and the IP address will do for the lab:


We could change the port number, but for most installations the default will do:

Now we wait for the SSO installation to complete. Time for coffee.
After SSO and its database are installed, the wizard will install the database for the vCenter Server:


Same warning for IP/FQDN choice:

It's best to leave the ports at their defaults. Because we have a small number of VMs, we don't need additional ephemeral ports:

Then again, the defaults:

After another coffee, the vCenter database is installed, as well as the Inventory Service and the
vCenter itself. This is what we want to see:


Before we connect to the vCenter Server, one thing we should keep in mind: in a lab like this, starting the vCenter Server can take some time, so we need to be patient and wait for the services to start:
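One way to check progress is to query the vCenter Server service from an elevated command prompt inside the Windows VM. This assumes the default service name vpxd that the vCenter installer registers; if in doubt, look for "VMware VirtualCenter Server" in Services.msc instead:

```shell
sc query vpxd
```

Once the state shows RUNNING, the client should be able to connect.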

We connect to the vCenter Server using the same interface as before, but now we use the IP address of the Windows server and the credentials of the local administrator:

Because we are now talking to the vCenter Server rather than an ESXi host, we again get the certificate warning. We can safely ignore it.


Now we create a virtual datacenter:

And we add the first ESXi host:

The wizard is fairly simple:

We accept the certificate:

And use the evaluation license:

For now, we will not engage the Lockdown Mode:

Now we select the datacenter; we only have one. Finally, we review the summary and click Finish. An agent will be installed on the ESXi host, which the vCenter Server will use to centrally manage it:

This process of adding an ESXi host should be repeated for the other ESXi host. Then we will have this situation:

We now have two ESXi hosts managed by a single virtual vCenter Server running on one of the ESXi hosts. The next step is to create shared storage and try one of the advanced features, such as vMotion.

Configuring networking for NFS and vMotion
Remember our diagram and the network settings in the host OS? At this point we only have our management network set up:

Now we should add two additional network adapters to our ESXi hosts. One will be on VMnet2 and used for NFS, the other on VMnet4 for vMotion. First we power off the ESXi hosts and add two adapters to each host. Within Workstation, we select Edit virtual machine settings, then click Add, select Network Adapter and click Next:

For the Network Adapter Type we select Custom and then VMnet2 for NFS or VMnet4 for vMotion:

Now we power the ESXi hosts back on and go to the network configuration, where we set up the newly added network adapters. First we add an NFS adapter. We click Add Networking, select the VMkernel type and click Next. We can see our two adapters with the proper network addresses listed next to each adapter:

We select vmnic1 and click Next. We give this network a name, check Use this port group for management traffic and click Next. The NFS communication is handled through a management-enabled interface, hence this option:

Next we specify the IP address settings:

Then we repeat these steps for the vMotion network. The differences are the IP settings and the option Use this port group for vMotion:
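The same VMkernel networking can be scripted from the ESXi Shell instead of the GUI wizard. A sketch, with assumptions labeled: the vSwitch names, vmnic2 as the vMotion uplink, and the 2.2.2.2 and 3.3.3.2 addresses are my own choices (only the 2.2.2.0/24 NFS subnet is fixed by the NFS export wildcard used later):

```shell
# NFS VMkernel port on a new vSwitch backed by vmnic1 (as in the wizard above)
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
esxcli network vswitch standard portgroup add -v vSwitch1 -p NFS
esxcli network ip interface add -i vmk1 -p NFS
esxcli network ip interface ipv4 set -i vmk1 -t static -I 2.2.2.2 -N 255.255.255.0

# vMotion VMkernel port (vmnic2 and the 3.3.3.x address are assumptions)
esxcli network vswitch standard add -v vSwitch2
esxcli network vswitch standard uplink add -v vSwitch2 -u vmnic2
esxcli network vswitch standard portgroup add -v vSwitch2 -p vMotion
esxcli network ip interface add -i vmk2 -p vMotion
esxcli network ip interface ipv4 set -i vmk2 -t static -I 3.3.3.2 -N 255.255.255.0
vim-cmd hostsvc/vmotion/vnic_set vmk2   # tag vmk2 for vMotion traffic
```

These commands run only on an ESXi host, so they are shown as a reference for scripting the second host.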

Now our network settings look like this:

Behind the scenes I have installed another VM, a SLES Linux machine, which will be used as NFS shared storage. I won't show here how to install this Linux and set up an NFS share, but I will show how the NFS setup screen should look:

Here we can see that we publish the /NFS folder, which must be created on the file system. We can also see that the wildcard list is 2.2.2.*, which means that only hosts from the NFS network can actually access the NFS store. Finally, we set the rw and no_root_squash options.
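On the SLES side, the screen above corresponds to a single line in /etc/exports. A sketch with the options from the walkthrough (the sync flag is a common default I've added explicitly, not something shown in the screenshots):

```text
# /etc/exports on the SLES NFS VM
/NFS    2.2.2.*(rw,no_root_squash,sync)
```

After editing the file, running `exportfs -ra` re-reads it and applies the export.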
To use this NFS storage, under Configuration->Storage we click Add Storage. Then we select
Network File System as a storage type and click Next:

Now we give the IP address of our Linux NFS VM, the folder that we share from that server, and a datastore name:

Please note that the NFS share should have these options enabled: rw, no_root_squash and an appropriate host-access wildcard (in our case 2.2.2.*). If everything goes well, we should see our shared storage:
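For reference, the same mount can be made from the ESXi Shell. The 2.2.2.10 address is an assumed address for the SLES NFS VM (the walkthrough does not list it), and NFS-datastore is the name chosen above:

```shell
# Mount the NFS export as a datastore named NFS-datastore
esxcli storage nfs add -H 2.2.2.10 -s /NFS -v NFS-datastore
esxcli storage nfs list   # verify the mount
```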




Try this all out
After we set up the other ESXi host the same way, we can first migrate our vCenter Server to the shared storage and then vMotion it from one host to another. First, let's move the vCenter Server from ESXi-1 to ESXi-2. We select the vCenter VM, right-click it and click Migrate. Then we select Change datastore and pick the NFS datastore. This could take a while, so it's time for coffee.
OK, finally! Now we have our vCenter Server on the shared storage, powered on on host 1.1.1.2:

Let's now vMotion this VM to the other host without interrupting the service. Right-click the VM, select Migrate, select Change host, select 1.1.1.3 and complete the wizard. After a while we can see that the VM is now running on the other host and no service was interrupted:

In a similar fashion we can play with a cluster and other features. One final thing I would like to point out: I created this lab on an HP ProBook 4740s with 8GB of RAM, and here is its health with only one VM powered on:


Of course, powering on a few more VMs won't affect the memory usage much, because they will use the memory already consumed by the ESXi hosts, but disk performance is a big issue here. So I would recommend 16GB of RAM and an SSD. That way we could try many vSphere 5.1 features.
Thanks for reading!
