
TABLE OF CONTENTS

MODULE 0: Introduction
  VMware Infrastructure 3
MODULE 1: VMware Overview
  Key Benefits of Virtual Machines
  VMware Products
    VMware Infrastructure 3
    VMware Virtual Desktop Infrastructure (VDI)
    VMware Converter (Formerly P2V and VMware Importer)
    VMware Lab Manager
    VMware Workstation
    VMware ACE
    VMware Player
    VMware Server
  Useful to Know - VMware File Extensions
MODULE 2: Install ESX Server
  ESX Server 3 Requirements
    Minimum ESX Server Hardware Requirements
    Enhanced Performance Recommendations
  ESX Server Installation
    Advanced Installation Options
    Host-Based Licensing
  Scripted ESX Server Installations
  Configuring Services
    Hostname Resolution
    Configure User Login
    VMkernel Configuration
    Configure Log File Rotation
    Configuring an NTP Client
    Allow VirtualCenter to See Configuration Changes
MODULE 3: Networking
  ESX Server Networking and Virtual Switches
    Virtual Switches
  Network Connections
    Service Console Port
    VMkernel Port
    Virtual Machine Port Group
  Naming Virtual Switches and Connections
  Create Virtual Switches using the Command Line
    Locate and Configure Physical NICs
    Configure Virtual Switches
    Configure the VMkernel Port
    Configuring Service Console Networking
  Modify Virtual Switch Configurations
    Virtual Switch Properties
    Network Policies
    Network Policy: VLANs
    Network Policy: Security
    Network Policy: Traffic Shaping
    Network Policy: NIC Teaming
MODULE 4: Storage
  Fibre Channel SAN Storage
    Addressing SAN LUNs in the VMkernel
    Making SAN Storage Available to ESX Server
  VMFS Datastore
    Extend a VMFS
    Multipathing with Fibre Channel
    Multipathing with iSCSI
  Managing VMFS Datastores
    esxcfg-vmhbadevs
    fdisk
    vmkfstools
    ls
    ln
    vdf
  iSCSI SAN Storage
    Components of an iSCSI SAN
    Addressing in an iSCSI SAN
    How iSCSI Targets are Discovered
    How iSCSI Storage Authenticates the ESX Server
    iSCSI Software and Hardware Initiator
    Set Up Networking for iSCSI Software Initiator
    Configuring iSCSI Storage
  NAS Storage and NFS Datastores
    Addressing and Access Control with NFS
  Storage Considerations
    ESX Server Feature Comparison by Storage Type
    Storage Considerations with Feature Components
MODULE 5: VirtualCenter Installation
  VirtualCenter Software Installation
    Order of Installation
    VirtualCenter Database Overview
    VirtualCenter Server Overview
    VirtualCenter Server Requirements
    Managing Across Geographies
    ESX Server 3 and VC 2 Architecture
MODULE 6: Virtual Machine Creation and Management
  Create a VM
    What Files Make Up a Virtual Machine?
    VM Virtual Hardware
  Create Multiple VMs
    Create a Template
    Guest OS Customization
  Manage VMs
    Move VMs Between ESX Servers: Cold Migration
    Snapshot a VM
MODULE 7: VM Access Control
  VMware Infrastructure User Access
    Privileges
    Roles
    Pre-defined and Custom Roles
    Permissions
    VirtualCenter Security Model
    ESX Server Security Model
  Accessing VMs Using Web Access
    Web Access Tasks
MODULE 8: VM Resource Management
  Using Resource Pools
    ESX Server Sizing: VMkernel Resources
    ESX Server Sizing: Service Console Resources
    VM CPU Resource Settings
    VM Memory Resource Settings
    How VMs Compete for Resources
    What is a Resource Pool?
    Admission Control for CPU and Memory Reservations
  Tools for Resource Optimization
    Virtual CPUs
    VMkernel Swap
  Migrate VMs with VMotion
    Move VM Between ESX Servers: VMotion Migration
    Virtual Machine Requirements for VMotion
    Host Requirements for VMotion
    CPU Constraints on VMotion
    Using Topology Maps to Plan VMotion Layout
  VMware DRS (Distributed Resource Scheduler)
    DRS: Purpose and Features
    DRS Cluster Settings - Automation Level
  Resource Pools in a DRS Cluster
    Delegated Administration
    Planned Downtime: Maintenance Mode
  CPU and Memory Resource Allocation
    Guidelines for Initial VM CPU Resources
    Guidelines for Initial VM Memory Resources
MODULE 9: VM Resource Monitoring
  Monitoring using Performance-based Alarms
    What is an Alarm?
    Creating a VM- and Host-Based Alarm
MODULE 10: Data Protection
  Backup Strategies
    VMware Consolidated Backup (VCB)
    File-Level Restore Methods
    Full Virtual Machine Backup and Restore
  How VCB Works
    VCB Components
  Using VCB Command-Line Utilities
MODULE 11: High Availability
  High Availability Strategies
    Architecture of a VMware HA Cluster
  Configure VMware HA
    Prerequisites for VMware HA
    What if a Host is Running but Isolated?
    Guidelines for Isolation Response Setting
MODULE 12: Security in a VMware Virtualized Environment
  VMware Infrastructure Security
    Network Security: Security Considerations for VLANs
    Network Security: Secure Virtual Machines with VLANs
    Network Security: Virtual Switch Protection and VLANs
    How the Service Console is Secured
  Managing Firewalls
    Client Utilities
    Service Console Firewall Management
  ESX Server User Administration
  SSH Access
  Configuring Sudo
  TCP Wrapper Configuration

VMware Infrastructure 3 Review

MODULE 0: Introduction


VMware Infrastructure 3
VI3 is a software suite for optimizing and managing IT environments through virtualization. It provides virtualization, management, resource optimization, application availability, and operational automation capabilities. The suite consists of the following applications:
- ESX Server: Platform on which virtual machines run.
  o VMware Virtual SMP: Multi-processor support (up to 4 virtual CPUs) for virtual machines.
- VMware VirtualCenter: Centralized management tool for ESX Servers and virtual machines.
  o VMware VMotion: Migration of virtual machines while they are powered on.
  o VMware HA (High Availability): VirtualCenter's high-availability feature for virtual machines.
  o VMware DRS (Distributed Resource Scheduler): VirtualCenter's feature for dynamic balancing and allocation of resources for virtual machines.
- VMware Consolidated Backup: Centralized backup software for virtual machines.
NOTE: VMotion, HA, and DRS are licensed individually and do not automatically come with the package.

MODULE 1: VMware Overview


Key Benefits of Virtual Machines
1. Isolation: VMs are isolated from other VMs. If one suffers a BSOD, all others keep running.
2. Encapsulation: A VM is represented as a small set of files.
3. Compatibility: Guest OSes see a standard x86 PC.
4. Hardware Independence: Guest OSes are protected from the details of (and changes in) physical hardware.

VMware Products

VMware Infrastructure 3
VMware Infrastructure comprises the following products:

VMware ESX Server
Virtual infrastructure platform for datacenter environments. ESX Server uses a bare-metal architecture to provide optimal performance and scalability for server applications running in virtual machines. It supports up to the following:
o 16 sockets and up to 32 processors
o Each ESX Server host can run up to 128 virtual processors concurrently, sharing up to 64GB of memory
o With Virtual SMP, ESX Server lets you configure 2- or 4-way virtual machines for larger workloads
o The VMkernel has complete control over hardware resources
o Uses a high-performance filesystem, VMFS-3
o Supports dynamic allocation of computing resources
o Enables VMs to use up to 4 physical processors with Virtual SMP
o The service console is based on a modified version of Red Hat Enterprise Linux 3 (Update 6)
o See http://www.vmware.com/pdf/vi3_301_201_installation_guide.pdf for more information on system requirements

VMware VirtualCenter
VMware's tool for managing your virtual infrastructure. It gives you a view of your entire virtual infrastructure, spanning all your ESX Servers and the virtual machines hosted on those servers.
- Used to provision new server virtual machines and serves as a repository for a library of standardized virtual machine templates
- Delivers the following features:
  o VMotion: lets you migrate running virtual machines between servers so you can perform hardware maintenance and shift workloads with minimal downtime
  o VMware Distributed Resource Scheduler (DRS): helps you balance virtual machine workloads across hosts
  o VMware High Availability (HA): helps you manage virtual machines for high availability and disaster recovery

VMware Virtual Desktop Infrastructure (VDI)
This solution allows organizations to host individual desktops inside virtual machines running in their data center on VMware Infrastructure (ESX Server and VirtualCenter).
- Enables organizations to move sensitive data normally stored on a PC into virtual machines.
- Allows desktop environments, encapsulated in virtual machines, to be recovered and redeployed in the event of a disaster.
- Is ideal for offsite facilities where confidential information and intellectual property can be securely stored and maintained in the data center.

VMware Converter (Formerly P2V and VMware Importer)
VMware Converter 3.0 is a tool that automates the process of converting physical machines, other virtual machine formats, and third-party image formats into VMware virtual machines. VMware Converter will convert the following:
- Physical machines to virtual machines, with no disruption or downtime.
- VMware virtual machine formats (virtual to virtual):
  o Workstation 4.x and 5.x
  o ESX Server 2.5.x (if managed by VirtualCenter 2.x)
  o ESX Server 3.x
  o GSX Server 3.x
  o VMware Server 1.x
  o VirtualCenter 2.x
- Some 3rd-party disk image formats (Windows OS only):
  o Symantec Backup Exec System Recovery images up to version 10d
  o StorageCraft ShadowProtect images
  o Microsoft Virtual PC 7
  o Microsoft Virtual Server (any version)

VMware Lab Manager
This environment is used to rapidly set up multiple test/development environments. It is also used to capture and reproduce software defects. It consists of the following components:
- Lab Manager Server: provides the web and SOAP (Simple Object Access Protocol) interfaces for the Lab Manager system. It also manages and deploys configurations against a pool of Managed Server systems.
- Lab Manager Managed Server: an ESX Server running the Managed Server software. Lab Manager uses the Managed Server to deploy configurations and their VMs.
- Lab Manager storage server: provides storage for VMs and media (CD and floppy images).
- Lab Manager client user: a client who can use the Lab Manager console and the Lab Manager SOAP API.

VMware Workstation
A virtual machine platform for the desktop that runs like an application on a Windows or Linux host. It lets you run additional OSes on top of the host OS. The OSes running on Workstation are called guest OSes and can be Windows, Linux, Novell, or Solaris x86. Other notable features:
- Supports Windows Vista (both guest and host)
- Supports USB 2.0 devices
- Provides full support for 2-way Virtual SMP
- Provides VMware Converter for creating a new VM from a physical disk

VMware ACE
Allows IT desktop managers to deploy standardized client PC environments inside secure, centrally managed virtual machines called ACEs. Its features are:
- Manage virtual desktops from a single point of control
- Increase security and flexibility for mobile and remote workers
- Safely extend corporate resources to 3rd-party unmanaged PCs
- Deliver fully configured virtual desktops for demos and training

VMware Player
VMware Player is free software that lets you run any preconfigured Windows or Linux 32- or 64-bit virtual machine on any PC. It is a way to introduce virtualization to first-time users.

VMware Server
VMware Server is a free virtualization product for Windows and Linux servers, and another good way to introduce virtualization to first-time users. It installs like an application, supports any Windows or Linux application, including pre-built virtual appliances from VMTN, and runs on any standard x86 hardware. It supports Intel Virtualization Technology, Virtual SMP, and 64-bit guest operating systems, and can run any virtual machine created by VMware GSX Server, ESX Server, or Workstation.

Useful to Know - VMware File Extensions:
.vmx = VMware virtual machine configuration file (text file)
.vmdk = descriptor file that points (maps) to the actual disk file
-flat.vmdk = the disk itself
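To make the relationship between these files concrete, here is a minimal, hypothetical .vmx fragment; the VM name vm01 and the values shown are illustrative, not defaults:

    # vm01.vmx - virtual machine configuration (plain text)
    config.version = "8"
    virtualHW.version = "4"
    displayName = "vm01"
    memsize = "512"
    guestOS = "winnetstandard"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "vm01.vmdk"   # small descriptor file

In this sketch, vm01.vmdk is the descriptor that maps to vm01-flat.vmdk, the flat file that actually holds the disk blocks.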

MODULE 2: Install ESX Server


ESX Server 3 Requirements

Minimum ESX Server Hardware Requirements
You need the following hardware and system resources to install and use ESX Server 3:
- At least two processors:
  o 1500 MHz Intel Xeon and later, or AMD Opteron (32-bit mode), for ESX Server 3
  o 1500 MHz Intel Xeon and later, or AMD Opteron (32-bit mode), for Virtual SMP
  o 1500 MHz Intel Viiv or AMD A64 x2 dual-core processors
- 1GB RAM minimum.
- One or more Ethernet controllers. Supported controllers include:
  o Broadcom NetXtreme 570x gigabit controllers
  o Intel PRO/100 adapters
  For best performance and security, use separate Ethernet controllers for the service console and the virtual machines.
- A SCSI adapter, Fibre Channel adapter, or internal RAID controller:
  o Basic SCSI controllers: Adaptec Ultra160 and Ultra320, LSI Logic Fusion-MPT, and most NCR/Symbios SCSI controllers.
  o Fibre Channel: see the Storage / SAN Compatibility Guide.
  o Supported RAID adapters: HP Smart Array, Dell PercRAID (Adaptec RAID and LSI MegaRAID), and IBM (Adaptec) ServeRAID controllers.
- A SCSI disk, Fibre Channel LUN, or RAID LUN with unpartitioned space. In a minimum configuration, this disk or RAID is shared between the service console and the virtual machines.
- For hardware iSCSI, a disk attached to an iSCSI controller, such as the QLogic qla405x.
- For SATA, a disk connected through supported dual SAS/SATA controllers that are using SAS drivers.

ESX Server 3 supports installing and booting from the following storage systems:
- ATA disk drives: Installing ESX Server 3 on an ATA drive or ATA RAID is supported. However, ensure that your specific drive controller is included in the supported hardware. Storage of virtual machines is currently not supported on ATA drives or RAIDs; virtual machines must be stored on VMFS volumes configured on a SCSI or SATA drive, a SCSI RAID, or a SAN.

- Serial ATA (SATA) disk drives: SATA disk drives, plugged into dual SAS/SATA controllers, are supported for installing ESX Server 3 and for storing virtual machines on VMFS partitions. Ensure that your SATA drives are connected through supported SAS/SATA controllers (driver - controller):
  o mptscsi_pcie - LSI1068E (LSISAS3442E)
  o mptscsi_pcix - LSI1068 (SAS 5)
  o aacraid_esx30 - IBM ServeRAID 8k SAS controller
  o cciss - Smart Array P400/256 controller
  o megaraid_sas - Dell PERC 5.0.1 controller

  NOTE: Sharing VMFS datastores on SATA disks across multiple ESX Server 3 hosts is not supported.
- SCSI disk drives: SCSI disk drives are supported for installing ESX Server 3. They can also store virtual machines on VMFS partitions.
- Storage area networks (SANs): SANs, both Fibre Channel and iSCSI, are supported for installing ESX Server 3. They can also store virtual machines on VMFS datastores.

NOTE: The minimum supported LUN capacity for VMFS-3 is 1200MB.

Enhanced Performance Recommendations
The lists in the previous sections suggest a basic ESX Server 3 configuration. In practice, you can use multiple physical disks, which include SCSI disks, Fibre Channel LUNs, RAID LUNs, and so on. Here are some recommendations for enhanced performance:
- RAM: Having sufficient RAM for all your virtual machines is important to achieving good performance. ESX Server 3 hosts require more RAM than typical servers: a host must be equipped with enough RAM to run its concurrent virtual machines plus the service console. For example, operating four virtual machines with Red Hat Enterprise Linux or Windows XP requires your ESX Server 3 host to be equipped with over a gigabyte of RAM for baseline performance:
  o 1024MB for the virtual machines (4 x 256MB, the minimum per guest OS recommended by vendors)
  o 272MB for the ESX Server 3 service console
  Running these example virtual machines with a more reasonable 512MB each requires the host to be equipped with at least 2.3GB of RAM (2048MB + 272MB = 2320MB):
  o 2048MB for the virtual machines (4 x 512MB)
  o 272MB for the ESX Server 3 service console

ESX Server Installation
The ESX Server physical setup must have both the service console and the VMkernel components installed on the same storage. The VMkernel gives the virtual machines and the service console access to the system's hardware. An x86-based disk can have at most four primary partitions. To get around the four-partition limitation, you need to create an extended partition; within the extended partition, logical drives can further subdivide the space (under Red Hat Linux, IDE disks can have up to 63 partitions and SCSI disks up to 15). The following partitions are required for the installation of an ESX Server:

- /boot
- swap
- / (root)
- VMFS-3
  o A partition that holds a VMware File System (VMFS) and is optimized for storing virtual machines.
- vmkcore
  o Used in the event of a serious error inside ESX Server.
  o The dump goes here when the kernel panics.
- /var/log (optional)
  o VMware recommends that this partition be separate from the root (/) filesystem to avoid filling it up. The minimum size is 500MB, but VMware recommends 2000MB.
- /opt (optional)
  o Directory also used to hold log files, specifically for the VMware HA product. It is recommended that you have a dedicated partition for /opt as well.

There are three locations for storing ISO images:
- VMFS datastore
  o Allows you to share the ISO images across multiple ESX Servers
  o Recommended way of sharing ISO images
- NFS datastore
  o Also allows you to share the ISO images across multiple ESX Servers
  o Recommended way of sharing ISO images
- /vmimages
  o Directory on the service console
  o Storing ISO images in this directory makes them available only to this ESX Server
  o It is recommended to create a separate partition for the /vmimages directory

Advanced Installation Options
This screen presents options for specifying the ESX Server boot loader. Ideally, the boot loader should be placed where the service console partition resides. The ESX Server can boot from a local SCSI LUN, a Fibre Channel SAN LUN, or an iSCSI SAN LUN. Once the ESX Server is installed, download the Virtual Infrastructure (VI) Client by going to the URL of the ESX Server (i.e. http://<ESX Server IP>). The VI Client is the primary interface for managing all aspects of the Virtual Infrastructure environment (ESX Servers, virtual machines, etc.) and is also used to configure the ESX Server.

Host-Based Licensing
Licenses are distributed via a license file (.lic). The two modes of licensing that ESX Server supports are host-based and server-based. Host-based licenses are installed on the ESX Server using the ESX Server's Configuration tab in the VI Client. The advantages of host-based licensing are that there is one less piece of infrastructure to maintain and that it is sufficient for a small, manageable number of ESX Servers. The disadvantages are that licenses do not float and that features requiring VirtualCenter cannot be used.

ESX Server 3 is available in two editions:
o Starter Edition: Provides virtualization for small business and branch office environments. Includes support for NAS and/or local storage and the ability to deploy on a server with up to 4 physical CPUs and up to 8GB physical RAM. Certain standard functionality is disabled or available only with an optional add-on license, at additional cost.
o Standard Edition: Provides an enterprise-class virtualized infrastructure suite for any workload, with full access to the complete feature set of ESX Server version 3. All standard functionality is enabled, and all optional add-on licenses can be configured with this license type: unlimited maximum number of VMs; SAN, iSCSI, NAS, and vSMP support; and the VCB add-on at an additional cost.

Scripted ESX Server Installations
Scripted ESX Server installations are used to simplify installation of ESX Server software on multiple hosts. Scripted installs can be performed by:
- Using a boot floppy
- Configuring a PXE server
- Implementing third-party applications
The ESX Server software can be pulled from the following locations:
- CD-ROM
- HTTP (URL)
- NFS server
- FTP server
The installation script is a simple text file containing a list of items, each identified by a keyword. The file consists of sections that must be specified in order; items within the sections do not have to be in a specific order unless otherwise noted. The order is as follows (a minimal sketch follows this list):
- Commands
- %packages
  o This section specifies the packages for installation and is required for a successful installation. The only entry that needs to be specified is @ base. This entry tells the installer to read the list of packages to install from the file comps.xml, found in the directory VMware/base on the package source, such as the installation CD.
- %pre (optional)
  o This section is executed prior to installation. It is not common to use this section when performing a fresh installation; it can be used when performing an upgrade.
- %post (optional)
  o This section is executed after the installation. It is commonly used to apply a standard configuration (e.g. DNS servers, NTP configuration, software installation, etc.)
- %vmlicense_text (optional)
  o This section is required only if using host-based licensing.
  o The text required in this section is the content of the host-based license file.
The first four sections are Red Hat based.
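As a rough illustration of the section order, here is a minimal, hypothetical install script. The keywords in the Commands section follow Red Hat kickstart syntax and should be verified against the ESX installation documentation; all hostnames, addresses, and the password are placeholders:

    # Commands section
    install
    cdrom
    lang en_US
    keyboard us
    rootpw password123
    network --bootproto static --ip 192.168.131.50 --netmask 255.255.255.0 --gateway 192.168.131.1 --hostname esx01.vmeduc.com

    %packages
    @ base

    %post
    # apply a standard configuration after install, e.g. name resolution
    echo "nameserver 192.168.131.10" >> /etc/resolv.conf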

Configuring Services
Once the ESX Server has been deployed, further configuration needs to be performed: name resolution, login configuration, VMkernel configuration, log file rotation, configuring an NTP client, and allowing command-line configuration changes to be reflected in VirtualCenter. The /etc directory contains most of the configuration on a UNIX or Linux system. It is similar to a Windows registry in that it contains custom configuration files. With the exception of the hosts database, all information is sourced from text files in this directory.

Hostname Resolution
The following files are involved in hostname resolution for the ESX Server and service console (example entries for these files appear after the next section):
- /etc/nsswitch.conf
  o Entries in this file govern how hostnames and/or IP addresses are resolved.
- /etc/hosts
  o Local file on the service console for hostname lookups.
  o Entries in the hosts file are formatted as: IP address, then hostname. Typically, hostnames in this file are entered in lowercase letters.
- /etc/resolv.conf
  o The service console DNS configuration file.
  o This file is created during the installation, based on the information provided at that time.
  o Only three entries are required:
    - A search list, which specifies the local domain name by default
    - At least two nameservers (a primary and a secondary) that the resolver should query

Configure User Login
A user must have an account and password in order to log in to an ESX host. User accounts are stored in the file /etc/passwd, and user passwords are stored in an encrypted file called /etc/shadow. There are several different ways to log in to an ESX 3.x host:
- Using the VI Client
  o Login can be governed via TCP wrappers
- Web Access
  o Changes to the Tomcat web service are not supported
- SSH client
  o Logins via an SSH client are governed by the sshd server configuration file /etc/ssh/sshd_config
  o /etc/issue: messages can be displayed to any user logging into the system by modifying this file
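Illustrating the hostname-resolution files described above, with placeholder addresses and domain names:

    # /etc/hosts - IP address followed by hostname(s), lowercase
    127.0.0.1       localhost.localdomain localhost
    192.168.131.50  esx01.vmeduc.com esx01

    # /etc/resolv.conf - search list plus primary and secondary nameservers
    search vmeduc.com
    nameserver 192.168.131.10
    nameserver 192.168.131.11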


VMkernel Configuration
The /etc/vmware/esx.conf file contains all device configuration, network configuration, boot configuration, storage parameters, etc. for the ESX Server. Changes to this file can be made via the VI Client, VirtualCenter, and the esxcfg-* tools. The commands to get (-g) and set (-s) values in the VMkernel are:
  esxcfg-advcfg -g <value>
  esxcfg-advcfg -s <value>
The esxcfg-advcfg command can also be used to add, remove, or modify entries found in the esx.conf file.

Configure Log File Rotation
The directory /etc/logrotate.d contains files with settings for log file rotation, compression, and how long to keep old log files. To change the global defaults for these settings, modify the file /etc/logrotate.conf.
RECOMMENDATIONS: It is recommended to change the size setting in the following log rotation scripts in the /etc/logrotate.d directory:
- esxcfg-boot
- esxcfg-firewall
- vmkernel
- vmksummary
- vmkwarning
Increasing the size value to 4096K (4M) allows the log files to grow a little bigger before they get rotated. Enabling compression, by changing nocompress to compress, allows greater logging with minimal impact on filesystem storage space.

Configuring an NTP Client
It is recommended to configure time correctly on an ESX Server: it results in accurate timestamps on log messages, and virtual machines may obtain time from the service console. The file that configures an NTP client is /etc/ntp.conf, which contains:
- a list of possible servers to synchronize with
- the order in which to query those servers
- whether or not to listen for broadcast or multicast NTP packets
NOTE: On ESX Server 3.x you must enable the NTP client service on the firewall by typing the command esxcfg-firewall -e ntpClient.
Example /etc/ntp.conf entries (the prefer keyword marks the primary server to query first):
  server 192.168.131.11 prefer
  server 192.168.131.12
A sketch of the full client setup follows.
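Putting the NTP pieces together, a plausible service-console sequence looks like this (a sketch assuming the standard Red Hat service tools available in the service console; verify service names against your build):

    esxcfg-firewall -e ntpClient   # open the NTP client port on the firewall (documented above)
    vi /etc/ntp.conf               # add the server ... lines shown above
    service ntpd restart           # restart the NTP daemon to pick up the change
    chkconfig ntpd on              # start the NTP daemon automatically at boot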

Allow VirtualCenter to See Configuration Changes
By default, VirtualCenter is not notified of any configuration changes that an administrator performs from the service console command line. Do the following to display the most up-to-date information:
- Click the Refresh link (if one exists) in the ESX Server's Configuration tab
- Log out of the VI Client and log back in
If neither works, execute the command service mgmt-vmware restart, which restarts vmware-hostd and the VirtualCenter agent, vpxa, on the ESX Server.

MODULE 3: Networking
ESX Server Networking and Virtual Switches
Networking in ESX Server is based on the concept of virtual switches. Virtual switches are software constructs employed by the VMkernel. VMs gain access to networks by associating their virtual NICs with virtual switches. The VMkernel also uses virtual switches to access iSCSI and NAS-based storage and to implement VMotion. Virtual switches also give the service console access to its network.

Virtual Switches
A virtual switch serves three primary functions:
- Virtual machine communications to the outside world
- Communications to the service console
- VMkernel communications through an embedded IP stack that is used for VMotion migration, NFS access, and iSCSI access
These switches work at Layer 2 of the OSI model. The first switch, created during installation, is the management switch. By default, the number of ports for a new virtual switch is 56 (v3.5). There is a maximum of 1016 ports per virtual switch. The maximum number of virtual switches supported per ESX Server is 127.
NOTE: You cannot have two virtual switches mapped to the same physical NIC. However, you can have two or more physical NICs mapped to the same virtual switch.

Network Connections
When a virtual switch is created, one or more connections must be defined. There are three types of network connections: service console port, VMkernel port, and virtual machine port group.

Service Console Port
- Port that gives access to the ESX Server management network
- Need to define a network label, a user-chosen text string identifying the port
- Need to define an optional VLAN tag
- Need to define IP settings, either static (recommended) or DHCP
- By defining a service console port on a virtual switch with 2 or more outbound adapters, the service console gains the benefits of NIC teaming in the same way that virtual machines do
- Each service console and VMkernel port must be configured with its own IP address, netmask, and gateway, as separate IP stacks are configured for each


VMkernel Port
- Port that gives access to VMotion, iSCSI, and/or NFS/NAS networks (IP-based storage)
- Required for VMotion
- Need to define a network label
- Need to define an optional VLAN tag
- Need to define whether or not to enable the port for VMotion
- Need to define IP settings

Virtual Machine Port Group
- Port that gives access to VM networks
- Need to define a network label
- Need to define an optional VLAN tag
- IP settings are configured by the guest OS for each virtual NIC configured for the virtual machine
More than one connection type can exist on a single virtual switch, or each connection type can exist on its own virtual switch. To create a network connection, use the VI Client: select the ESX Server in the inventory, click its Configuration tab, select the Networking link, then click the Add Networking link.

Naming Virtual Switches and Connections
Every virtual switch is identified by the name vSwitch#, where # is a sequential number starting with 0. If there are multiple service console ports, each service console port is identified by the name vswif#, where # is a sequential number starting with 0.

Create Virtual Switches using the Command Line
Configuring virtual networking on ESX Server 3.x can employ command line utilities to manage network configurations. Four important command line utilities are: esxcfg-nics, esxcfg-vswitch, esxcfg-vmknic, and esxcfg-vswif.

Locate and Configure Physical NICs
The command line utility esxcfg-nics is used to configure physical NICs or report their current configuration (a usage sketch follows the option list).
Syntax: esxcfg-nics <options> <vmnic#>
Options:
-l  prints the current NIC configuration:
    o VMkernel names for physical NICs (i.e. vmnic#)
    o NIC adapter PCI bus, slot, and port
    o device driver name
    o link status
    o speed/duplex
    o vendor information
-a  sets the NIC to auto-negotiate (if required/desired)
-d  configures the duplex setting
-s  configures the speed setting
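For example, a plausible session using the options above (the vmnic number and speed values are illustrative):

    esxcfg-nics -l                      # list all physical NICs with driver, link, speed/duplex
    esxcfg-nics -s 1000 -d full vmnic2  # force vmnic2 to 1000 Mbps, full duplex
    esxcfg-nics -a vmnic2               # or return vmnic2 to auto-negotiation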

Configure Virtual Switches
The command to create/modify/delete a virtual switch and add/remove port groups is esxcfg-vswitch. These virtual switches are used by virtual machines, the VMkernel, and the service console.
Syntax: esxcfg-vswitch <options> <vswitch>:<ports>
Options:
-a  adds a virtual switch
-d  deletes a virtual switch
-l  lists the virtual switch configuration
-L  links a physical NIC to a virtual switch
-U  unlinks a physical NIC from a virtual switch
-p  specifies a port group name (used with the -v option)
-v  sets the VLAN ID for a specific port group
-A  adds a port group with the given name
-D  removes a port group with the given name
-c  checks for an existing virtual switch (returns 0 or 1)
-C  checks for an existing port group name
By default, ESX Server 3.x allows the creation of up to 128 virtual switches. To create more than 128 virtual switches, this feature must be enabled in the VMkernel. ESX Server 3.x can support up to 4096 virtual switch ports.

Configure the VMkernel Port
The command to configure a virtual switch port for use specifically by the VMkernel is esxcfg-vmknic. A VMkernel port is used for tasks such as VMotion migration and access to IP-based storage (iSCSI or NFS). The command esxcfg-route <gateway_IP_address> is used to set the default gateway for the VMkernel port.
Syntax: esxcfg-vmknic <options> <portgroup>
Options:
-a  adds a VMkernel NIC to the system
-d  removes a VMkernel NIC from the port group
-l  prints the VMkernel NIC configuration
-i  specifies an IP address for the VMkernel port
-n  specifies a netmask for the VMkernel port
-e  enables the VMkernel NIC on this port group
-D  disables the VMkernel NIC on this port group
NOTE: esxcfg-route will also be required to set the VMkernel gateway. A combined usage sketch follows.
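Tying these together, a plausible sequence that builds a new switch with a VLAN-tagged VMkernel port (the switch, port group, VLAN, and address values are illustrative):

    esxcfg-vswitch -a vSwitch1                   # create a new virtual switch
    esxcfg-vswitch -L vmnic2 vSwitch1            # link a physical NIC to it
    esxcfg-vswitch -A VMkernel vSwitch1          # add a port group named "VMkernel"
    esxcfg-vswitch -v 105 -p VMkernel vSwitch1   # tag that port group with VLAN ID 105
    esxcfg-vmknic -a -i 192.168.140.50 -n 255.255.255.0 VMkernel   # create the VMkernel NIC
    esxcfg-route 192.168.140.1                   # set the VMkernel default gateway
    esxcfg-vswitch -l                            # verify the configuration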

Configuring Service Console Networking
The command to use when attaching the service console to multiple networks is esxcfg-vswif. This command can also be used to create an interface that sniffs traffic off a virtual switch used by virtual machines. You can use the ifconfig command to view active vmnic and vswif interfaces.
Syntax: esxcfg-vswif <options> <vswif#>
Options: This command uses the same options as esxcfg-vmknic (applicable to the service console rather than the VMkernel) with these additions:
-a  adds a vswif interface; requires IP parameters (i.e. -i and -n)
-i  IP address for the specified vswif
-n  IP netmask for the specified vswif
-p  sets the port group name of the COS interface
-c  checks for the existence of the COS vswif interface
-D  disables all vswif interfaces on the COS (dangerous)
-E  enables all vswif interfaces on the COS
A usage sketch follows.
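For instance, a plausible command to add a second service console interface on an existing port group (the interface name, port group, and addresses are illustrative):

    esxcfg-vswif -a -i 192.168.131.60 -n 255.255.255.0 -p "Service Console 2" vswif1
    ifconfig vswif1     # confirm the new interface is up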


Modify Virtual Switch Configurations

Virtual Switch Properties
Every virtual switch has the following properties (tabs in the UI):
- General: lets you define the number of ports for the entire virtual switch
- Network Policies: lets you configure Security, Traffic Shaping, and NIC Teaming
  o The network policies of a virtual switch become the default policies for all ports and port groups created on that virtual switch. These policies can be modified at the port and port group level to override the defaults of the virtual switch.
To get to this display, use the VI Client: select your ESX Server in the inventory, click its Configuration tab, click the Networking link, then click the Properties... link next to the virtual switch.
NOTE: The default number of ports on a new virtual switch is 56. The exception is the virtual switch created during the ESX installation process, which defaults to 24 ports. The maximum number of ports is 1016.

Network Policies
There are four network policies that can be defined for the entire virtual switch, the service console port, the VMkernel port, or a VM port group. When a policy is defined for an individual port or port group, it overrides the default policies defined for the virtual switch. When a policy is defined at the virtual switch level, it is the default policy for all the ports on the virtual switch.

Network Policy: VLANs
VLANs operate at Layer 2 of the OSI model, the same layer at which MAC addresses and Ethernet operate. They allow the creation of multiple logical LANs within or across physical network segments, freeing network administrators from the limitations of physical network configuration. Benefits of VLANs are:
- Improved security: the switch presents frames only to stations in the right VLANs
- Improved performance: each VLAN is its own broadcast domain
- Lower cost: less hardware is required
ESX Server provides VLAN support through virtual switch tagging, which is enabled by giving a port group a VLAN ID. The VMkernel takes care of all the tagging and untagging as packets pass through the virtual switch.

Network Policy: Security
The security policy allows administrators to configure Layer 2 Ethernet security options at the virtual switch and port group levels. The three security policies available are:
- Promiscuous Mode: When set to Reject (the default), placing a guest adapter in promiscuous mode has no effect on which frames the adapter receives.
- MAC Address Changes: When set to Reject, a guest that changes its MAC address to something other than what is configured in the virtual hardware stops receiving frames. The default setting is Accept.
- Forged Transmits: When set to Reject, any frames the guest sends with a source MAC different from the one currently configured in the virtual hardware are dropped. The default setting is Accept.

Network Policy: Traffic Shaping
Traffic shaping is a mechanism for controlling a VM's outbound network bandwidth; it pertains only to outbound, not inbound, traffic. The three configurable items are:
- Average rate: specified in Kbps
- Peak rate: specified in Kbps
- Burst size: specified in KB
Traffic shaping is off by default. It can be enabled for the entire virtual switch, but keep in mind that port group settings override the switch settings.

Network Policy: NIC Teaming
NIC teaming allows you to determine how network traffic is distributed between adapters and how to re-route traffic in the event of an adapter failure. Default NIC teaming settings are set for the entire virtual switch and can be overridden at the port group level; port group failover order can override the vSwitch failover order. NIC teaming settings are:
- Load Balancing (outbound only)
  o Route based on the originating port ID (vSwitch port-based) (default)
  o Route based on source MAC hash
  o Route based on IP hash (choose an uplink based on a hash of the source and destination IP addresses of each packet)
- Network Failure Detection
- Notify Switches
- Rolling Failover
- Failover Order

MODULE 4: Storage


Fibre Channel SAN Storage
When using Fibre Channel SAN storage, the ESX Server requires a Fibre Channel switch for communication to storage; using more than one switch allows for redundancy. A Fibre Channel switch interconnects multiple nodes, forming the fabric in a Fibre Channel network. Generally speaking, a Fibre Channel node is a server, a storage system, or a tape drive.
LUN masking makes a LUN invisible when a target is scanned, and is usually set at the SP (Storage Processor) level. LUNs can also be masked via the VI Client, under the Configuration tab > Advanced Settings > Disk > Disk.MaskLUNs section. Example:
  VMHBA1:0:1-10, 12-256
This example denotes that the only LUN visible to the ESX Server is LUN 11.

Addressing SAN LUNs in the VMkernel
Addressing SAN LUNs for the ESX Server follows a fixed format. The VMkernel disk partition addressing scheme is composed of:
- vmhba: standard label that identifies a physical host bus adapter
- Adapter: adapter ID assigned to each HBA
- Target ID: represents the SCSI target that the storage processor presents
- LUN: logical unit number
- Partition: partition on the LUN, identified by a number
Syntax: vmhba<adapter#>:<target#>:<LUN#>:<partition#>
Examples:
- LUN addresses: vmhba0:0:11 and vmhba1:1:12
- Partition addresses: vmhba0:0:11:3 and vmhba1:1:12:1
BEST PRACTICE: It is recommended to have one VMFS volume per LUN.

Making SAN Storage Available to ESX Server
During the boot sequence, the VMkernel detects the Fibre Channel storage adapter. At boot, the VMkernel scans up to 256 LUNs (0-255); however, the ESX installer can only see the first 128 LUNs.

VMFS Datastore
A VMware File System (VMFS) is a filesystem optimized for storing ESX Server virtual machines. It has the following properties:
VMFS is a repository of virtual machines and virtual machine state.
VMFS can be deployed on a variety of SCSI-based storage devices, including Fibre Channel and iSCSI SAN equipment.
Each virtual machine's files are located in its own subdirectory.
Contains templates and ISO images.
VMFS volumes are addressed by a volume label, a datastore name, and a physical address (e.g. vmhba1:0:0:1).
Accessible in the service console underneath the /vmfs/volumes directory.
o This directory contains a subdirectory for each VMFS.
o The serial number of the disk on which the VMFS resides is used as the name of the subdirectory.

Extend a VMFS
The size of a VMFS can be extended dynamically. In the ESX Server context, an extent is a hard disk partition on a physical storage device that can be dynamically added to an existing VMFS-based datastore. The datastore can stretch over multiple extents, yet appear as a single volume (analogous to a spanning volume). When extending a VMFS datastore, the LUN identifier after spanning is the parent's LUN. The maximum capacity of a single VMFS extent is 2TB, and the maximum number of LUNs you can combine as extents is 32, which gives you a total capacity of 64TB. A few reasons to extend a VMFS volume are:
To give a VMFS more space without taking it offline
To create a VMFS larger than 2TB
In some cases, to improve overall I/O performance of the VMFS
NOTE: Be aware that when using extents with multiple LUNs, the master extent member, which is the first LUN in the set, contains the metadata for the entire extent set. If that master LUN is lost, it could cause a loss of all data on the entire extent set!
When adding an extent candidate to a VMFS, you are provided a list of possible extent candidates that does not include LUNs with existing VMFSes. If you choose a candidate with existing data (e.g. an NTFS partition), you are warned that the data will be permanently lost if you use it.

Multipathing with Fibre Channel
Multipathing allows continued access to a SAN LUN in the event of hardware failure. The failover occurs automatically, with a configurable delay. When multipathing is configured, exactly one path is active (in use) to any LUN at any time. You can enable or disable individual failover paths by changing their status. The following multipathing policies are currently supported:
Fixed: The ESX Server host always uses the preferred path to the disk when that path is available. If it cannot access the disk through the preferred path, it tries the alternate paths. Fixed is the default policy for active/active storage devices.
Most Recently Used: The ESX Server host uses the most recent path to the disk until this path becomes unavailable. That is, the ESX Server host does not automatically revert back to the preferred path. Most Recently Used is the default policy for active/passive storage devices and is required for those devices.
The ESX Server host automatically sets the multipathing policy according to the make and model of the array it detects. If the detected array is not supported, it is treated as active/active.
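Path state and policy can also be inspected from the service console with esxcfg-mpath; a hedged sketch (the exact --policy/--lun option spelling is an assumption about the ESX 3 tool, and the LUN address is hypothetical):

Example:
# List all paths and their current state and policy
esxcfg-mpath -l
# Set the policy for one LUN to Most Recently Used
esxcfg-mpath --policy=mru --lun=vmhba1:0:11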

Multipathing with iSCSI
Multipathing with iSCSI provides a simpler multipath structure than Fibre Channel networks, because IP networking already has multipath support built in. iSCSI initiators recognize multiple paths from a SendTargets discovery. As with Fibre Channel SANs, ESX uses multipathing for failover purposes only. The failover policies of Fixed and MRU are the same policies used with SAN multipathing. With ESX 3, only active/passive configurations are supported.
There is no heterogeneous multipathing, meaning you cannot use a NIC and an iSCSI adapter to access the same iSCSI storage. The software initiator supports only a single storage interface; in other words, the software initiator looks like a single iSCSI HBA. However, keep in mind that the software initiator sits on top of multiple NICs, and therefore multipathing can be performed through the networking layer in the VMkernel.
It is possible to have both Fibre Channel and iSCSI HBAs in the same ESX Server. However, they cannot be pointed at the same LUN; this is not a supported configuration.

Managing VMFS Datastores
There are several commands that can be used via the service console to manage ESX storage:

esxcfg-vmhbadevs
In order to run commands to configure your storage, you need a way to identify the LUN to the service console. Where the VMkernel addresses a LUN using the vmhba address (e.g. vmhba1:0:31), the service console addresses a LUN by its device file name. You can identify LUNs that are available to an ESX host using the esxcfg-vmhbadevs command. This command lists the device file names associated with a server's LUNs.
Example output: /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde

fdisk
The standard Linux partitioning tool, fdisk, can be used to partition LUNs from the command line. When launching fdisk, you must use the Linux file name, for example /dev/sda. One function you can perform in fdisk is to create a partition to hold a VMFS. VMFS volumes must be identified by a partition system ID of fb. VMware recommends creating partitions that align to 64KB track boundaries. Doing so eliminates track crossing and benefits performance on all storage. To align a partition, such as a VMFS volume created during installation, to a specific boundary, use fdisk in expert mode. Expert mode provides extra functionality such as the ability to align partitions to a specific track boundary and to modify the number of cylinders, sectors, and tracks.

vmkfstools
Once a partition is created on a LUN, you can create a VMFS volume in that partition by using the vmkfstools command. The command allows you to do the following:
Create and label VMFS volumes
Read VMFS volume metadata
Add extents to VMFS volumes

Creating a VMFS volume:
To create a VMFS volume, use the vmkfstools command with the following syntax:
vmkfstools -C vmfs2|vmfs3 -S <label> vmhba#:#:#:#
-C: specifies the type of VMFS volume to create
-S: specifies a simple name (label) that references the VMFS volume
The partition itself is given in the vmhba address format.

Example: vmkfstools -C vmfs3 -S LUN_3 vmhba1:0:3:1
The process of creating a VMFS volume generates a unique hexadecimal value known as a UUID. The UUID is used to name a file that corresponds to the LUN. This file is found under the service console directory /vmfs/volumes.

Reading metadata in a VMFS volume:
To read VMFS volume metadata, use the vmkfstools command with the following syntax:
vmkfstools -P -h <label>
-P: read the metadata
-h: print output in MB or GB instead of KB
<label>: the VMFS volume label
Example: vmkfstools -P -h LUN_3
The VMFS metadata is held in a number of system files:
File descriptor system file - .fdc.sf
Sub-block system file - .sbc.sf
File block system file - .fbb.sf
Pointer block system file - .pbc.sf
Volume header system file - .vh.sf

Extending VMFS volumes:
Extending a VMFS volume via the service console can also be performed with the vmkfstools command. The easier method is to add extents (a.k.a. spanning) to an existing VMFS volume. The merging of two VMFS volumes results in data loss on the target LUN (i.e. the LUN that is added as an extent). The LUN that will be added as an extent to a VMFS volume can be either formatted with a VMFS file system or a raw LUN. The -Z option of vmkfstools allows you to add an extent to an existing VMFS.
Syntax: vmkfstools -Z <VMFS_extent> <VMFS-A>
Example: vmkfstools -Z vmhba1:0:20:1 vmhba1:0:19:1
NOTE: Any data on VMFS_extent will be destroyed as it is merged with VMFS-A.
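Putting the commands above together, a minimal end-to-end sketch of creating a new VMFS from the service console (the device file and vmhba addresses are hypothetical; fdisk is interactive, so its steps are summarized as a comment):

Example:
# 1. Map service console device files to vmhba addresses
esxcfg-vmhbadevs
# 2. Partition the LUN with fdisk (interactive): create a primary
#    partition, set its system ID to fb, align it in expert mode
fdisk /dev/sdb
# 3. Create and label a VMFS-3 volume on the new partition
vmkfstools -C vmfs3 -S NewLUN vmhba1:0:20:1
# 4. Verify by reading the volume metadata
vmkfstools -P -h NewLUN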

Removing an Extended VMFS Volume:
It is possible to remove an extent from a VMFS volume. However, this is a destructive process. To remove an extent, you must recreate the VMFS volume. This can be accomplished by using the vmkfstools -P command to see the extents, then using vmkfstools to create a new VMFS volume on the first extent.

ls
There are two ways to map label names to physical VMFS volumes. The first is the command ls -l /vmfs/volumes, which produces a listing of the contents of the /vmfs/volumes directory and shows the volume label to physical volume relationship. The vdf command can also be used to display the mapping between a VMFS volume label and the physical VMFS volume.
Syntax: ls -l /vmfs/volumes

ln
To modify an existing VMFS volume label, or to add a label to a VMFS that currently does not have one, use the ln command with the -s and -f options. In the command line below, the first argument identifies the VMFS volume by its UUID file, which can be determined using either ls -l /vmfs/volumes or vdf. The second argument defines the new volume label. The VMFS volume label is used to name a file under the /vmfs/volumes directory.
Syntax: ln -sf /vmfs/volumes/<UUID> /vmfs/volumes/<new_label_name>

vdf - Viewing Disk Consumption
The vdf command is most commonly used to quickly assess VMFS volume consumption. It can be used for the following:
Useful when deciding where to place VMDK files
Can be inserted into a shell program that monitors disk consumption
Shows the amount of space that exists in the volume, how much is consumed, and how much is available
Prints a percentage indicating the amount of volume consumption
Reports on both VMFS volumes and native Linux volumes
NOTE: use vdf -h to display values in GB or MB, instead of KB.
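For instance, a hedged sketch of relabeling a volume (the UUID and label shown are hypothetical; substitute the UUID reported by ls -l /vmfs/volumes):

Example:
# Find the UUID directory for the volume
ls -l /vmfs/volumes
# Point the new label at the UUID file
ln -sf /vmfs/volumes/45a1b2c3-d4e5f607-8899-000c29aabbcc /vmfs/volumes/PROD_LUN3
# Confirm the label-to-volume mapping and space usage
vdf -h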

iSCSI SAN Storage
iSCSI (Internet Small Computer System Interface) provides an alternative approach to Fibre Channel SANs:
Cost: iSCSI is less expensive to implement than Fibre Channel because you can use your existing NICs, and Ethernet switches cost less than Fibre Channel switches.
Infrastructure: You can use your existing infrastructure (e.g. CAT5 wiring) and network knowledge.
Routing: Routing is already available in an IP network.
Internet: iSCSI is Internet-ready.
Longhaul transfers: iSCSI can do longhaul data transfers because it can use the Internet for transport. This is difficult with Fibre Channel, which must use a gateway to tunnel through or convert to IP.

How is iSCSI Used with ESX Server?
Boot ESX Server from iSCSI storage
o ESX Server is installed on an iSCSI LUN much like on a Fibre Channel LUN or local disk. Booting ESX from iSCSI is supported only with a hardware initiator; it is not supported with the software initiator.
Create a VMFS on an iSCSI LUN
o The VMFS can be used to hold VM state, ISO images, and templates.
Allow VM access to a raw iSCSI LUN
Allow VMotion migration of a VM whose files reside on an iSCSI LUN

Components of an iSCSI SAN
An initiator transmits SCSI commands over the IP network. A target receives SCSI commands from the IP network. You can have multiple initiators and targets in the iSCSI network. iSCSI is SAN-oriented in that the initiator finds one or more targets, a target presents LUNs to the initiator, and the initiator sends it SCSI commands. An initiator resides in the ESX Server, while targets reside in the storage arrays supported by the ESX Server.

Addressing in an iSCSI SAN
The main addressable, discoverable entity in iSCSI is an iSCSI node. An iSCSI node can be an initiator, a target, or both. Both targets and initiators require names for the purpose of identification, so that iSCSI storage resources can be managed regardless of location. The IQN (iSCSI Qualified Name) naming convention is as follows (an example appears after the discovery methods below):
The string "iqn"
A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority
The organizational naming authority string, which consists of a valid, reversed domain or subdomain name
Optionally, a ":" followed by a string of the assigning organization's choosing, which must make each assigned iSCSI name unique

How iSCSI Targets are Discovered
There are two methods supported for discovering iSCSI targets:
Static Configuration: The IP address, TCP port, and iSCSI target name are already available to the initiator. No target discovery is necessary. This option is convenient for small iSCSI setups.
SendTargets: The initiator uses the target's IP address and TCP port information to establish a discovery session to that IP address. The initiator then issues the SCSI SendTargets command to query information about the iSCSI targets available at that particular IP address.
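As referenced above, a sample IQN assembled from the four naming parts (the string after the colon is hypothetical; iqn.1998-01.com.vmware is the prefix VMware registered):

Example: iqn.1998-01.com.vmware:esx01-storage1
o "iqn" is the literal prefix
o "1998-01" is the date code for when the naming authority registered its domain
o "com.vmware" is the reversed domain name of the naming authority
o ":esx01-storage1" is the organization-chosen string that makes the name unique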


NOTE: Hardware initiators support both the static and SendTargets configurations, whereas software initiators support only SendTargets.

How iSCSI Storage Authenticates the ESX Server
The method of authentication that iSCSI supports is CHAP (Challenge-Handshake Authentication Protocol). CHAP authentication is a mechanism in which the target (the storage resource) authenticates the initiator trying to access it (in this case, the ESX Server). CHAP can be enabled on either a hardware or software initiator. CHAP allows a password to be verified without sending the password (in cleartext) over the network. By default, CHAP is disabled. The CHAP password must match the CHAP password set at the target you wish to establish communication with.

iSCSI Software and Hardware Initiators
ESX Server provides full support for software and hardware initiators. However, it does not support both hardware and software initiators running simultaneously.
The software initiator is a port of the Cisco iSCSI Initiator Command Reference implementation. It works with the vmkiscsid daemon that runs in the service console. Therefore, the service console and VMkernel NICs both need to talk to the iSCSI storage, since the iSCSI daemon is what initiates the session and what performs the login and authentication.
The hardware initiator looks like any other SCSI adapter. SCSI LUNs are made available to the ESX Server from the iSCSI adapter. The iSCSI adapter has its own networking stack, which is referred to as the TOE (TCP offload engine).

Set Up Networking for the iSCSI Software Initiator
In order to set up networking for the iSCSI software initiator, both the service console and the VMkernel need to access the iSCSI storage. There are two ways to do this:
Have the service console port and VMkernel port share a virtual switch and be in the same subnet.
Have routing in place so both the service console port and VMkernel port can access the storage.
As noted above, the vmkiscsid daemon in the service console initiates the session and performs the login and authentication; the actual I/O goes through the VMkernel.
In order for the iSCSI software initiator to communicate with its target iSCSI storage, outgoing port 3260 needs to be opened in the service console firewall. This is done through the VI Client. To configure the iSCSI software initiator, select your ESX Server, click the Configuration tab, and select the Storage Adapters link. A list of available storage adapters is displayed. Select the iSCSI Software Adapter, then click the Properties link.

Configuring iSCSI Storage
The following commands can be used to configure iSCSI storage:
esxcfg-firewall [options] [service]: lets you enable or disable the iSCSI software client (among other services) in the service console firewall
o -q: Display the current firewall settings
o -e: Open the port in the firewall required by the specified service
o -d: Close the port in the firewall required by the specified service
o -s: List the known firewall services

esxcfg-swiscsi: lets you enable or disable the iSCSI software adapter
o -e: Enable software iSCSI on the system, if disabled
o -d: Disable software iSCSI on the system, if enabled
o -q: Check if software iSCSI is enabled or disabled on the system
o -s: Scan the system for disk(s) available through the software iSCSI interface

vmkiscsi-tool: lets you configure or display iSCSI settings such as the iSCSI name, alias, and target devices
o -I: iSCSI name (used with a subcommand)
o -k: Alias (used with a subcommand)
o -D: Discovery (used with a subcommand)
o -T: Target (used with a subcommand)
o -a: Subcommand to add an iSCSI initiator property
o -l: Subcommand to list an iSCSI initiator property
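A hedged end-to-end sketch of bringing up the software initiator from the service console (the firewall service name swISCSIClient, the discovery address, and the vmhba40 adapter name are assumptions based on typical ESX 3 setups):

Example:
# Open outgoing port 3260 for the iSCSI software client
esxcfg-firewall -e swISCSIClient
# Enable the software initiator
esxcfg-swiscsi -e
# Add a SendTargets discovery address for the software HBA
vmkiscsi-tool -D -a 192.168.10.50 vmhba40
# Rescan for LUNs presented through the software initiator
esxcfg-swiscsi -s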

NAS Storage and NFS Datastores
What are NAS and NFS?
NAS stands for Network-Attached Storage. It is a specialized storage device that connects to a network and can provide file access services to an ESX Server. NAS is used because it is a low-cost, moderate-performance option, and it requires less infrastructure investment than Fibre Channel. There are two key NAS protocols:
NFS (Network File System) version 3, over TCP only: ESX Servers use the NFS protocol to communicate with NAS servers.
SMB (Windows networking, also known as CIFS)
ESX Server supports the following shared storage capabilities on NFS volumes:
Use VMotion
Create virtual machines
Boot virtual machines
Mount ISO files, which are presented as CD-ROMs to virtual machines
In order for the ESX Server to access the NFS server, the ESX Server must be configured with a VMkernel port defined on a virtual switch that has access to the NFS server over the network.
NOTE: You cannot format an NFS datastore as VMFS; this is not supported.

Addressing and Access Control with NFS
The /etc/exports file on the NFS server defines the systems allowed to access the shared directory. The options used in this file are:
Name of the directory to be shared
Subnet(s) allowed to access the share
rw: allows both read and write requests on this NFS volume
no_root_squash: By default, the root user (whose UID is 0) is given the least amount of access to an NFS volume. This option turns off that behavior, which is needed because the VMkernel accesses the NFS volume using UID 0.
sync: all file writes must be committed to the disk before the write request by the client is actually completed
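A minimal sketch of an /etc/exports entry on the NFS server combining these options (the export path and subnet are hypothetical):

Example:
/exports/vm_datastore 192.168.10.0/24(rw,no_root_squash,sync)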

Storage Considerations
The following is a comparison of the Fibre Channel, iSCSI, and NAS storage systems:

Fibre Channel
Protocols: FC/SCSI
Transfers: Block access of data/LUN
Interface: FC HBA
Performance: High (due to dedicated network)

iSCSI
Protocols: IP/SCSI
Transfers: Block access of data/LUN
Interface: iSCSI HBA or NIC
Performance: Medium (depends on integrity of LAN)

NAS
Protocols: IP/NFS
Transfers: File (no direct LUN access)
Interface: NIC and IP switches
Performance: Medium (depends on integrity of LAN)

ESX Server Feature Comparison by Storage Type

Fibre Channel: Boot VM: Yes; Boot ESX Server: Yes; VMotion: Yes; VMFS: Yes; RDM: Yes; VM Cluster: Yes; VMware HA and DRS: Yes; VCB: Yes
iSCSI: Boot VM: Yes; Boot ESX Server: Yes; VMotion: Yes; VMFS: Yes; RDM: Yes; VM Cluster: Yes; VMware HA and DRS: Yes; VCB: Yes
NAS: Boot VM: Yes; Boot ESX Server: No; VMotion: Yes; VMFS: No; RDM: No; VM Cluster: No; VMware HA and DRS: Yes; VCB: No

Storage Considerations with Feature Components

VMFS: One VMFS volume per LUN. Use more than one VMFS to maintain separate test and production environments.
RDM (Raw Device Mapping): Use RDMs with VMs for 1) physical-to-virtual clusters or cluster-across-boxes and 2) use of hardware snapshotting functions of the disk array.
Boot-from-SAN: Each boot LUN should be seen only by the ESX Server booting from that LUN.
VMotion: LUNs holding the VM's virtual disks must be visible from both the source and destination ESX Servers.
VMware HA: Each server must have access to the same shared storage. All LUNs used by clustered VMs must be seen by all ESX Servers.
iSCSI: For best performance and security, put iSCSI on a separate and isolated IP network.
NAS/NFS: For best performance and security, put NAS on a separate and isolated IP network. ESX Server needs full access to NFS datastores to create directories and set permissions (use no_root_squash). 8 NFS mounts are allowed per ESX Server by default. Avoid VM swapping to NFS volumes.

MODULE 5: VirtualCenter Installation


VirtualCenter Software Installation
The VirtualCenter environment consists of the following software components:
VirtualCenter Server: Service used to centrally administer ESX Servers. It directs actions to be taken on the virtual machines and the ESX Servers.
VMware License Server: Server-based licensing for VirtualCenter and ESX Server functionality.
VI Client: GUI-based interface for configuring and managing ESX Servers and virtual machines.
Web Access: Web-based interface for managing virtual machines.
VirtualCenter database: Main repository of VirtualCenter information, including configuration and performance data.
VirtualCenter Agents: Processes that run on the ESX Servers, used to receive tasks initiated by VirtualCenter.

Order of Installation
This is the recommended order of installation:
1) Database Server
a. Create a database connection to either a SQL Server or Oracle database.
b. NOTE: The database instance must be created for VirtualCenter before performing the installation.
2) License Server
a. The License Server can be installed before or during the VirtualCenter Server installation (the VirtualCenter Server installer wizard prompts for, and will install, a license server if one is not already installed).
3) VirtualCenter Server
4) Virtual Infrastructure Client
a. The Virtual Infrastructure Client can be installed at any time.

VirtualCenter Database Overview
The VirtualCenter database is a storage area for maintaining the VirtualCenter inventory as well as the status of each VM and each managed host.

VirtualCenter Database Requirements
VirtualCenter supports the databases listed below, with the service pack, patch, and driver requirements for each:
Microsoft SQL Server 2000 Standard SP4 and Enterprise SP4: For Windows 2000 and Windows XP, apply MDAC 2.8 SP1 to the client. Use the SQL Server driver for the client.
Microsoft SQL Server 2005 Standard and Enterprise: Install SP1 or SP2 for Microsoft SQL Server 2005. For Windows 2000 and Windows XP, apply MDAC 2.8 SP1 to the client. Use the SQL Native Client driver for the client.
Microsoft SQL Server 2005 Express SP2: For Windows 2000 and Windows XP, apply MDAC 2.8 SP1 to the client. Use the SQL Native Client driver for the client.
Oracle 9i Release 2 Standard and Enterprise: Apply patch 9.2.0.8.0 to the server and client.
Oracle 10g Standard and Enterprise Release 2 (10.1.0.3.0): None.
Oracle 10g Standard and Enterprise Release 2 (10.2.0.1.0): First apply patch 10.2.0.3.0 to the client and server. Then apply patch 5699495 to the client.

VirtualCenter Server Overview
VirtualCenter Server is a system that acts as a proxy for the ESX Servers it manages, which may be geographically dispersed, and directs actions taken upon VMs. It uses existing Windows-based user accounts and includes the following services:
VMware Virtual Infrastructure Web Access: Allows users to manage VMs using a web browser.
VMware Virtual Mount Manager Extended: Service used during guest OS customization (when cloning a VM or deploying a VM from a template).
VMware VirtualCenter Server: The heart of VirtualCenter. It centrally manages all tasks performed on the ESX Servers and virtual machines. If the Windows OS that VirtualCenter Server runs on is a member of a Windows domain (either NT4 or Active Directory), it will automatically access all Windows user and group accounts in that (and any trusted) Windows domain.

VirtualCenter Server Requirements
The VirtualCenter Server is a physical machine or virtual machine configured with access to a supported database.
Hardware Requirements: The VirtualCenter Server hardware must meet the following requirements:
Processor: 2.0GHz or higher Intel or AMD x86 processor. Processor requirements can be larger if your database runs on the same hardware.
Memory: 2GB RAM minimum. RAM requirements can be larger if your database runs on the same hardware.
Disk storage: 560MB minimum, 2GB recommended. You must have 245MB free on the destination drive for installation of the program, and 315MB free on the drive containing your %temp% directory.
NOTE: Storage requirements can be larger if your database runs on the same hardware as the VirtualCenter Server machine. The size of the database varies with the number of hosts and virtual machines you manage. Using default settings for a year with 25 hosts and 8 to 16 virtual machines each, the total database size can consume up to 2.2GB (SQL) or 1.0GB (Oracle).

Microsoft SQL Server 2005 Express disk requirements: The bundled database requires up to 2GB free disk space to decompress the installation archive. However, approximately 1.5GB of these files are deleted after the installation is complete.
Networking: Gigabit connection recommended.

VirtualCenter Server Software Requirements: The VirtualCenter Server is supported as a service on the 32-bit versions of these operating systems:
Windows 2000 Server SP4 with Update Rollup 1
Windows XP Pro SP2
Windows 2003 Server SP1 (all releases except 64-bit)
Windows 2003 Server R2
NOTE: For any operating system except Windows Server 2003 SP1, install Microsoft Windows Installer 3.1; otherwise your VirtualCenter installation can fail. VirtualCenter 2.x installation is not supported on 64-bit operating systems. The VirtualCenter installer requires Internet Explorer 5.5 or higher to run.

Managing Across Geographies
A single VirtualCenter management server can be used to manage multiple ESX Server datacenters in different geographical locations.
NOTE: If the management server must go through a firewall in order to manage a group of ESX Servers, ensure that ports 902 and 903 are open. By default, VirtualCenter and ESX Server software is configured to use TCP/IP ports 27000 and 27010 to communicate with the license server.

ESX Server 3 and VC 2 Architecture
The VI Client and the Web Client are the user interfaces used to access either the VirtualCenter Server or the ESX Server directly. The Web Client provides a browser-based interface for managing VMs. The Web Client requires that Web Access run on the VirtualCenter Server, the ESX Server, or both.
Two services exist on the ESX Server that are responsible for coordinating and launching tasks received from VirtualCenter or from the client interfaces:
hostd: the ESX Server host agent
vpxa: the VirtualCenter agent
The VirtualCenter Server sends task requests to the VirtualCenter agent, vpxa, which then forwards them to hostd. hostd is a background process on the ESX Server that launches the task to be performed.
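Both agents run as service console services, so they can be restarted from the service console when troubleshooting management connectivity; a hedged sketch using the service names commonly seen on ESX 3:

Example:
# Restart the ESX host agent (hostd)
service mgmt-vmware restart
# Restart the VirtualCenter agent (vpxa)
service vmware-vpxa restart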

MODULE 6: Virtual Machine Creation and Management


Create a VM
A virtual machine is configured with a set of virtual hardware on which a supported guest OS and its applications run. The virtual machine is a set of discrete files. A VM's configuration file describes the VM's configuration, which includes the virtual hardware such as CPU, memory, disk, network interface, CD-ROM drive, floppy drive, etc.

What Files Make Up a Virtual Machine?
VM_name.vmx: Virtual machine configuration file
VM_name.vmdk: File describing virtual disk characteristics
VM_name-flat.vmdk: Preallocated virtual disk file that contains the data
VM_name.nvram: Virtual machine BIOS
vmware.log: Virtual machine log file
vmware-#.log (where # is a number starting with 1): Files containing old virtual machine log entries
VM_name.vswp: Virtual machine swap file (its maximum size is the memory limit minus the reservation set)
VM_name.vmsd: File that describes virtual machine snapshots
Additional files may exist if snapshots are taken or raw disk mappings are added.
NOTE: All virtual machines have one or more virtual disk files (the first virtual disk has files VM_name.vmdk and VM_name-flat.vmdk; subsequent virtual disks are named VM_name_#.vmdk and VM_name_#-flat.vmdk, where # is the next number in the sequence, starting with 1).

VM Virtual Hardware
Each guest OS sees ordinary hardware devices; it does not know that these devices are actually virtual. The following hardware can be found on a virtual machine:
CPU and Memory
o 1, 2, or 4 virtual CPUs (a Virtual SMP license is required for 2- and 4-CPU VMs)
o Specify maximum memory size: 16GB for ESX Server 3.0, 64GB for ESX Server 3.5
Virtual Disk
o Adding the first virtual disk implicitly adds a virtual SCSI adapter for it to be connected to. ESX Server offers a choice of either a virtual LSILogic adapter or a virtual BusLogic adapter. The virtual machine creation wizard in the VI Client automatically selects the type of virtual SCSI adapter based on the choice of guest OS.
o Virtual disk files are VM_name.vmdk and VM_name-flat.vmdk
o Advanced setting modes: persistent or non-persistent (per-disk configuration)
Network Adapter
o Connects to a virtual switch
CD-ROM drive
o Connects to a CD-ROM or ISO image
Floppy drive
o Connects to a floppy or floppy image (.flp)
Generic SCSI devices (such as tape libraries)
o May be connected to additional SCSI adapters


Create Multiple VMs
Templates are used to assist in the creation of multiple virtual machines. A template is a master image of a virtual machine that can be used to create and provision new virtual machines. A template can be stored in either normal or compact disk format.
Normal disk format: the virtual machine's disk files remain untouched. Use this option if you want to convert the template back into a running virtual machine.
Compact disk format: the virtual disk files are compressed to remove redundant information and save space. This is only supported on VMFS-3 datastores.
Templates can be stored in a VMFS or NFS datastore.

Create a Template
There are two ways to create a template:
Clone to template: use this option if you would like to keep the original VM.
Convert to template: use this option if you wish the original VM to go away after it is converted to a template.

Guest OS Customization
Cloning a VM is an alternative to deploying a VM from a template. As when deploying from a template, when you clone you have the option of customizing the guest OS in the clone. For guest OS customization to work, it must be enabled in VirtualCenter. To enable it for Windows VMs, install the sysprep files on the VirtualCenter Server. It is already enabled for Linux VMs (the open source components are installed with the VirtualCenter Server).
To customize Windows VMs, install the Microsoft sysprep files on the VirtualCenter Server:
1. Retrieve the installer for Microsoft sysprep 1.1 from the Microsoft web site.
2. Launch the installer that you just retrieved.
3. Extract the files to C:\Documents and Settings\ALLUSERSPROFILE\Application Data\VMware\VMware VirtualCenter\sysprep\1.1, where ALLUSERSPROFILE is usually All Users and the Application Data folder is a hidden folder.
VirtualCenter supports guest OS customization for Windows 2000, Windows XP, and Windows 2003.

Manage VMs
Move VMs Between ESX Servers: Cold Migration
A cold migration is used to move a virtual machine from one ESX Server to another while the VM is powered off. With a cold migration, the VM's files may or may not all move. Keep in mind that the VM's files are located in a subdirectory on either a VMFS datastore or an NFS datastore. When the destination ESX Server is not able to see the VM's files (because, for example, the VM's files are located on a local datastore on the source ESX Server), the files must be moved to a datastore visible to the destination ESX Server in order for the VM to be migrated. Perform a cold migration when:
Moving a VM to an ESX Server with a local (non-shared) datastore

Moving VMs between ESX Servers using different CPU families

Snapshot a VM
Snapshots let you preserve the state of a virtual machine so you can return to the same state repeatedly. A snapshot captures the entire state of a virtual machine at the time you take the snapshot. This includes the settings state, the disk state, and (optionally) the memory state. For example, when snapshots are taken by VCB (VMware Consolidated Backup), the memory state is not captured. A virtual machine can have more than one snapshot. Each snapshot consists of the following files:
Snapshot differences file: VM_name-00000#-delta.vmdk, where # is the next number in the sequence, starting with 1
Snapshot description file: VM_name-00000#.vmdk
Memory state file: VM_name-SnapshotName.vmsn (the size of this file is the size of the VM's maximum memory if memory is captured; otherwise the file is much smaller)
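Snapshots are normally taken through the VI Client, but the service console's vmware-cmd utility can also script them; a hedged sketch (the datastore path and snapshot name are hypothetical, and the two trailing flags are assumed to be the quiesce and memory options):

Example:
# Take a snapshot named "pre-patch", quiescing the disk and capturing memory
vmware-cmd /vmfs/volumes/storage1/MyVM/MyVM.vmx createsnapshot pre-patch "before patching" 1 1
# Revert to the snapshot, or remove all snapshots for this VM
vmware-cmd /vmfs/volumes/storage1/MyVM/MyVM.vmx revertsnapshot
vmware-cmd /vmfs/volumes/storage1/MyVM/MyVM.vmx removesnapshots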

MODULE 7: VM Access Control


VMware Infrastructure User Access
The main components of the Virtual Infrastructure security model are the following:
User: login account with access to the Virtual Infrastructure
Role: a set of one or more privileges
Privilege: specifies a task that a user is authorized to perform
Permission: the pairing of a user and a role (which consists of a set of privileges)

Types of Users
There are two types of users:
VirtualCenter users and groups: those from the VirtualCenter Server's domain
ESX Server users and groups: those defined in its service console
The same security model applies to both VirtualCenter users and ESX users; however, the permissions are different and there is no synchronization of permissions between VirtualCenter and ESX Server.

Privileges
Privileges are the building blocks of roles; a role is basically a set of one or more privileges. A privilege allows access to a specific task and is grouped with related privileges into categories. The following are the default privilege categories that, when selected for a role, can be paired with a user and assigned to an object:
Alarms
Datacenter
Datastore
Extensions
Folders
Global
Host CIM
Host Configuration
Host Inventory
Host Local Operations
Network
Performance
Permissions
Resource
Scheduled Task
Sessions
Tasks
Virtual Machine Configuration
Virtual Machine Interaction
Virtual Machine Inventory
Virtual Machine Provisioning
Virtual Machine State

Roles
A role consists of one or more privileges and is managed through the VI Client. Roles are not hierarchically organized; in other words, a role is neither superior nor subordinate to another role. All roles are independent of each other.

Pre-defined and Custom Roles
ESX Server and VirtualCenter provide default roles:
Default ESX Server user and group roles:
No Access
Read-Only
Administrator
Default VirtualCenter user and group roles:
No Access
Read-Only
Administrator
Virtual Machine Administrator
Datacenter Administrator
Virtual Machine Power User
Virtual Machine User
Resource Pool Administrator
VMware Consolidated Backup User
You cannot modify the default roles No Access, Read-Only, and Administrator. It is possible to modify the other default roles, but it is recommended to create a custom role instead.


Permissions
Permissions are granted by pairing a user (or group) with a role and assigning the pair to an inventory object. Permissions can also be propagated downward through the inventory, if you choose. Permissions can be overridden at a lower level by adding a new permission for the same user.
How Permissions Are Applied:
Scenario 1: If a user is a member of multiple groups with permissions on different objects:
o For each object on which the group has permissions, the same permissions apply as if granted to the user directly.
Scenario 2: If a user is a member of multiple groups with permissions on the same object:
o The user is assigned the union of privileges assigned to the groups for that object.
Scenario 3: Permissions defined explicitly for the user on any object take precedence over all group permissions.
Scenario 4: Permissions applied directly on an object take precedence over propagated permissions.

VirtualCenter Security Model
In the VirtualCenter security model, the VirtualCenter user is a Windows user account, either local or domain. The user is assigned a role, and the user/role combination is applied to an object in the VirtualCenter inventory. By default, the local group Administrators is assigned the Administrator role at the topmost level of the Hosts & Clusters view and the Virtual Machines & Templates view.

ESX Server Security Model
In the ESX Server security model, the ESX user is a service console (Linux) user account. The ESX user is assigned either a default role or a custom ESX Server role. The user/role combination is applied to a level in the ESX inventory (host, VM, or resource pool level). By default, the service console users vpxuser and root are assigned the Administrator role at the ESX Server host level in the inventory.
vpxuser is a user account used by the VirtualCenter Server to identify itself when sending task requests to the ESX Server.
o vpxuser is created at the time an ESX Server host is attached to VirtualCenter. It is not present on the ESX Server host unless the host is being managed through VirtualCenter.
root is the administrator account on any Linux/UNIX system.
o root performs the tasks requested by VirtualCenter.


Accessing VMs Using Web Access
Web Access is a browser-based application that focuses on managing VMs on ESX Server and VirtualCenter deployments. Before using Web Access, the user must supply either a valid user name and password for the VirtualCenter Server or a valid user name and password for the ESX Server. Remember to use a VirtualCenter user account and password if logging into VirtualCenter; if logging into an ESX Server directly, enter an ESX Server user account and password.
Benefits:
Administrators can provide end users with access to VMs
Users do not need to install the VI Client onto their desktops
Client devices allow users to use their local floppy and CD-ROM drives
Users reach Web Access via a web browser pointed at either a VirtualCenter Server or an ESX Server. VMware has certified Web Access with the following browsers:
Internet Explorer 6.0
Mozilla Firefox for Microsoft Windows 1.0.8 or higher
Mozilla Firefox for Linux 1.0.8 or higher
Netscape 7.0

Web Access Tasks
Web Access is used to manage VMs only; unlike Web Access, the VI Client can also manage ESX Server hosts. The list of VMs displayed depends on what you log into. If you log into Web Access on an ESX Server, you will see a list of all VMs located on that server. If you log into Web Access on a VirtualCenter Server, you will see a list of all VMs located on all ESX Servers managed by that VirtualCenter. You cannot create new VMs using Web Access; creating new VMs must be done using the VI Client instead.

MODULE 8: VM Resource Management


Using Resource Pools
Several components of the ESX Server require resources. Here is a way to calculate some of them.
ESX Server Sizing: VMkernel Resources
Memory: to calculate memory resources, find the maximum memory required for each virtual machine, then sum the values. If you plan to overcommit your memory, sum the minimum memory sizes for each virtual machine instead.
Memory overhead: each powered-on virtual machine has some memory overhead. The value ranges from 78 to 350 MB; for example, 79 MB of overhead would be typical for a 32-bit, single-CPU VM with 256-512 MB maximum memory.

Disk space: find out how much disk space would be needed if this system were a physical machine. You must also account for the size of the VMkernel swap file allocated to each virtual machine when it is powered on. The maximum size of the VMkernel swap file equals the virtual machine's maximum memory size. Therefore, the virtual disk(s) and the VMkernel swap file make up most of the disk space for each virtual machine.

ESX Server Sizing: Service Console Resources
The default amount of memory that the service console requires is 272 MB; this is also the recommended size. In addition to memory, the service console requires the following:
Memory: 272 MB
Disk space: disk space for its partitions
One NIC: sufficient for the service console, which connects it to the management network. The service console can also share the same physical NIC with virtual machines.
The service console is a single-CPU operating system and can be scheduled on the same physical CPU as virtual machines.

VM CPU Resource Settings
A virtual machine has three settings that affect its CPU resource allocation. VirtualCenter keeps track of reservations.
CPU limit: defines the maximum amount of CPU, measured in MHz, that this virtual machine is allowed.
CPU reservation: defines the amount of CPU, measured in MHz, reserved for this virtual machine when CPU contention occurs. If the virtual machine does not use the total amount of its CPU reservation, the unused portion is available for use by other virtual machines until this virtual machine needs it.
CPU shares: each virtual machine is granted a number of CPU shares. The more shares a VM has, the more often it gets a timeslice of the CPU when there is no CPU idle time. The shares option kicks in once there is contention for resources among two or more VMs.

VM Memory Resource Settings
A virtual machine has four memory settings that affect its memory resource allocation.
Available memory: the amount of memory given to the virtual machine at the time it was created. It is the virtual machine's maximum memory size.
Memory limit: defines the maximum amount of available memory that can reside in RAM. By default, available memory and memory limit are initially the same value.
Memory reservation: the amount of RAM reserved for that virtual machine. Memory reservation differs from CPU reservation in that memory reserved for a virtual machine will not be donated to other virtual machines under any circumstances.
Memory shares: memory shares are separate from CPU shares but are applied the same way. A virtual machine's memory shares control how often it wins competition for memory when memory is scarce.
NOTE: If the values of available memory and memory reservation differ, the VMkernel allocates a per-VM swap file to cover the difference. Therefore, the total size available to a virtual machine could consist of physical memory (whose size is determined by available memory) and swap space (provided by the VM swap file).

How VMs Compete for Resources
The proportional share mechanism applies to CPU and memory allocation, and only operates when virtual machines are contending for the same resource. Shares guarantee that a virtual machine is given a certain amount of a resource (CPU or memory). We can add shares to a VM while it is running, and it will get more access to that resource (assuming there is competition). When we add a new VM, it gets shares too, and its share amount factors into the mix, but the existing VMs are guaranteed not to be starved of the resource. When we delete or power off a VM, there are fewer shares in play, so the surviving VMs get more access.
Shares specify the relative priority or importance of a virtual machine. If a virtual machine has twice as many shares of a resource as another virtual machine, it is entitled to consume twice as much of that resource. Shares are typically specified as High, Normal, or Low, and these values specify share values with a 4:2:1 ratio, respectively. You can also choose Custom to assign a specific number of shares (which expresses a proportional weight) to each virtual machine. When you assign shares to a virtual machine, you always specify the relative priority for that virtual machine; a worked example follows this section. CPU and memory share values, respectively, default to:
High: 2000 shares per virtual CPU and 20 shares per megabyte of virtual machine memory
Normal: 1000 shares per virtual CPU and 10 shares per megabyte of virtual machine memory
Low: 500 shares per virtual CPU and 5 shares per megabyte of virtual machine memory

What is a Resource Pool?
A resource pool is a named object in the VirtualCenter inventory. It is a pool of CPU and memory for VMs. It can be used on a stand-alone host or a DRS-enabled cluster (group of hosts), and it has associated access control and permissions. Each resource pool has the following values:
Reservation: minimum amount of resource required by the resource pool (in MHz and MB)
Limit: maximum amount of resource given to this resource pool (in MHz and MB); unlimited by default (up to the maximum amount of resource accessible)
Shares: guaranteed amount of resource for the resource pool from the parent (Low, Normal, High)
NOTE: There is an option to enable Expandable Reservation, which lets VMs and sub-pools draw from the pool's parent. This can only be configured at the resource pool level, not on virtual machines.

Admission Control for CPU and Memory Reservations
When you power on a virtual machine, the system checks the amount of CPU and memory resources that have not yet been reserved. Based on the available unreserved resources, the system determines whether it can guarantee the reservation for which the virtual machine is configured (if any). This process is called admission control. If enough unreserved CPU and memory are available, or if there is no reservation, the virtual machine is powered on. Otherwise, an Insufficient Resources warning appears.
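As referenced above, a worked example of the proportional share mechanism (the VM names and values are hypothetical): suppose VM-A is set to High (2000 CPU shares) and VM-B to Normal (1000 CPU shares), and both contend for a fully used 3000 MHz of CPU. VM-A is entitled to 2000/3000 of the resource (2000 MHz) and VM-B to 1000/3000 (1000 MHz). If a third VM with Normal shares (1000) powers on, the total becomes 4000 shares, and VM-A's entitlement drops to 2000/4000 of 3000 MHz, or 1500 MHz.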

Tools for Resource Optimization
Virtual CPUs
A virtual machine can be configured with 1, 2, or 4 virtual CPUs (VCPUs). When a VCPU needs to be scheduled, the VMkernel maps the VCPU to a hardware execution context (HEC). A hardware execution context is a processor's capability to schedule one thread of execution. The number of hardware execution contexts available for scheduling depends on the type of system being used.
In general, a socket is another term for the entire physical processor package. A socket contains one or more CPUs in the same package; each of these CPU equivalents is a core. For example, a single-core, dual-socket system has two sockets with one core in each socket, and a dual-core, single-socket system has one socket containing two cores. In relation to hardware execution contexts, a dual-core, single-socket system has two cores and therefore two hardware execution contexts (without Hyper-Threading enabled). A quad-core, single-socket system has four cores and therefore four hardware execution contexts (without Hyper-Threading enabled).

VMkernel Swap
When a virtual machine is powered on for the first time, the system allocates a VMkernel swap file for it. This file serves as a backing store for the virtual machine's RAM contents. In the event that the VMkernel needs to reclaim some or all of this virtual machine's memory, and the balloon driver cannot free enough memory, the VMkernel will copy page contents to the VMkernel swap file before giving the pages to other virtual machines. The size of the VMkernel swap file is determined by the following formula:
VMkernel swap file size = memory limit amount - memory reservation amount
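For example (hypothetical values): a VM with a 1024 MB memory limit and a 256 MB memory reservation gets a VMkernel swap file of 1024 - 256 = 768 MB, since only the reserved 256 MB is guaranteed to be backed by physical RAM.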

Migrate VMs with VMotion
Migrating a running virtual machine between ESX Servers is called VMotion. A VMotion migration moves a VM while it is powered on.

Move VMs Between ESX Servers: VMotion Migration
Why migrate using VMotion?
Improve overall hardware utilization
Allow continued VM operation while accommodating scheduled hardware downtime
VMotion allows working processes in a VM to continue throughout a migration. The entire state of the VM is moved to the new host while the data storage remains in the same datastore.


Virtual Machine Requirements for VMotion
Migrating a VM with any of the following conditions produces an error:
VM has an active connection to an internal virtual switch
VM has an active connection to a CD-ROM or floppy device with a local image mounted
VM has its CPU affinity set to run on one or more specific physical CPUs
VM is in a cluster relationship (e.g. using MSCS) with another VM
Migrating a VM with any of the following conditions produces a warning:
VM is configured with an internal switch but is not connected to it
VM is configured to access a local CD-ROM or floppy image but is not connected to it
VM has one or more snapshots
No guest OS heartbeats are being received (due to the guest OS not responding or VMware Tools not being configured properly)

Host Requirements for VMotion
The source and destination ESX Servers must have the following:
Visibility to all SAN LUNs (either FC or iSCSI) and NAS devices used by the VM (shared storage)
A Gigabit Ethernet backplane (interconnection)
Access to the same physical networks
Consistently labeled virtual switch port groups
Compatible CPUs, meaning the source and destination servers have CPUs from the same compatibility group. Newly exposed CPU features introduce new VMotion compatibility constraints and trade-offs.

CPU Constraints on VMotion
Clock speeds, cache sizes, hyperthreading, and number of cores: exact match not required; these are virtualized away by the VMkernel.
Manufacturer (Intel or AMD): exact match required; instruction sets contain many small differences.
Family (P3, P4, Opteron): exact match required; instruction sets contain many small differences.
Presence or absence of SSE3 instructions: exact match required; multimedia instructions are usable directly by applications.
Virtualization Hardware Assist: for 32-bit VMs, exact match not required; for 64-bit VMs on Intel, exact match required, because VMware's Intel 64-bit implementation leverages VT.
Execution-Disable: exact match required (but customizable); the guest OS relies on the NX/XD bit if detected.

Using Topology Maps to Plan VMotion Layout
The Maps panel in the VI Client provides a visual understanding of the relationships between the virtual and physical resources available in the VirtualCenter inventory. Maps are a visual way of verifying that the VMotion requirements relating to networks and datastores are met by a particular set of hosts. Maps show a graphical representation of the relationships between the following:
Hosts
Virtual machines
Networks

Datastores

VMware DRS (Distributed Resource Scheduler)
VMware has a feature called Distributed Resource Scheduler (DRS) which allows you to aggregate the CPU and memory resources of a compute resource, which is either a standalone host or a cluster enabled for VMware DRS. A DRS cluster is implicitly a resource pool.

DRS: Purpose and Features
Goals of DRS:
Balance VM load across hosts in a cluster
Enforce resource policies accurately (reservations, limits, shares)
Respect placement constraints
o Affinity and anti-affinity rules
o VMotion compatibility (CPU type, SAN and LAN connectivity)
Initial Placement:
Powers on a VM in a resource pool
Recommends a host with a prioritized list
Does not use VMotion
Dynamic Balancing:
Monitors key VM, pool, and host metrics
Delivers entitled resources to pools and VMs
Recommends migrations with a prioritized list

DRS Cluster Settings: Automation Level
DRS has the following automation settings for the DRS cluster:
Manual: When you power on a virtual machine, VMware DRS displays a list of recommended hosts. When the cluster becomes unbalanced, DRS displays recommendations for virtual machine migration.
Partially automated: When you power on a VM, VMware DRS places it on the best-suited host. When the cluster becomes unbalanced, VMware DRS displays recommendations for virtual machine migration.
Fully automated: When you power on a VM, VMware DRS places it on the best-suited host. When the cluster becomes unbalanced, VMware DRS migrates virtual machines from overutilized hosts to underutilized hosts to ensure balanced use of cluster resources.
NOTE: You can customize the automation level for individual virtual machines in a DRS cluster to override the automation level set for the entire cluster. This allows you to fine-tune automation to suit your needs.


Resource Pools in a DRS Cluster: Delegated Administration
A pool can reflect any organizational structure that makes sense to you, such as a pool for each department, project, or client. You can associate access control and permissions with different levels in the resource pool hierarchy. A cluster administrator is given at least the Datacenter Administrator role. A pool administrator is given the Resource Pool Administrator role. An end user is given at least the Virtual Machine Power User role.

Planned Downtime: Maintenance Mode
Maintenance mode restricts the VM operations on the host to allow you to conveniently shut down running VMs, or VMotion VMs to other hosts, in preparation for host shutdown.
Normal mode
o You can power on VMs as needed, and VMs can be migrated to this host
Entering Maintenance mode
o All running VMs must either be shut down or migrated to other hosts; no new VMs can be powered on; no VMs will be migrated to this host
Maintenance mode
o All VMs have been manually powered off or migrated to other hosts; no new VMs can be powered on; no VMs will be migrated to this host

CPU and Memory Resource Allocation
Guidelines for Initial VM CPU Resources
Initial CPU reservation per VM: typically between 5 and 10 percent of a single processor.
Initial CPU limit per VM: keep the limit 15-20% above the reservation. If the VM never reaches its limit, consider lowering this value.
CPU shares per VM: leave at the Normal value.
Guidelines for Initial VM Memory Resources
Initial memory reservation per VM: 50% of the limit value. Set it low enough for the guest OS and application to remain viable while minimizing balloon driver surrender requests and VMkernel swap activity.
Initial memory limit per VM: should satisfy the requirements of the guest OS and application(s). Set the maximum memory value.
Memory shares per VM: leave at the Normal value.

Memory overhead per powered-on VM: ranges between 79 and 734 MB. It depends on a VM's maximum memory size, its number of vCPUs, and whether it is running a 32-bit or 64-bit guest OS.
o Virtual machine admission control considers not only the VM's allocated memory, but also its overhead memory, which ranges from 79 MB for a single-vCPU, 256 MB, 32-bit VM to 734 MB for a 4-vCPU, 16 GB, 64-bit VM.

MODULE 9: VM Resource Monitoring


Monitoring Using Performance-Based Alarms
What is an Alarm?
Alarms are asynchronous notifications of changes in host or virtual machine state. When a host's or virtual machine's load passes certain configurable thresholds, the Virtual Infrastructure Client will display messages to this effect. You can also configure VirtualCenter to transmit these messages to external monitoring systems.

Creating a VM- and Host-Based Alarm
Create an alarm by right-clicking on a VM or host and choosing Add Alarm. The trigger types that can be configured are the following:
VM or Host CPU Usage
VM or Host Memory Usage
VM or Host Network Usage
VM or Host Disk Usage
VM or Host State
Once alarms are configured, actions can be set to send external messages or to respond to the problem proactively. Actions that can be performed are:
Send a notification e-mail
Send a notification trap
Run a script
Power on a VM
Power off a VM
Suspend a VM
Reset a VM

MODULE 10: Data Protection



Backup Strategies
Different strategies exist to back up data:
Perform a VM file-level backup using a backup agent in the VM
o Use this to perform file-level backups of the guest OS.
o Backups from within the virtual machine, using a backup agent, are best for application data because no system shutdown is required. In contrast, virtual disk backups are best for system images, because they always result in a bootable virtual disk, suitable for rapid deployment.
Perform a Windows VM file-level backup using VMware Consolidated Backup (VCB)
Perform a full virtual machine backup using VCB

VMware Consolidated Backup (VCB)
VMware Consolidated Backup (VCB) is the backup solution for a datacenter where VMs reside on a SAN. With VCB, the task of doing a backup is taken out of the VM and out of the ESX Server, and placed onto a dedicated physical host referred to as the backup proxy server. Because the backup task is offloaded to the backup proxy server, backups do not interfere with the ESX Server's workload. The backup proxy server is a physical machine that can read data directly off the VMFS volume located on the SAN. If tape storage is also located on the SAN, then the backup is essentially LAN-free.

Properties of VMware Consolidated Backup (VCB)
A LAN-free, online backup solution for VMs on ESX Server plus SAN
o The backup is offloaded to a dedicated physical host
Filesystem-consistent backup
o VMware Tools can quiesce the filesystem before files are backed up
Supports different backup flavors
o File-level backup (Windows guests)
o Full virtual machine backup (all guests)
Works with major third-party backup software
Requires a physical Windows 2003 Server and a Fibre Channel or iSCSI SAN
PROS:
Provides LAN-free, online backups of virtual disks
Not necessary to purchase a backup client per VM
Possible to back up to FC-attached tape drives
Integrated with backup solutions
No resource contention on the ESX Server host
Better performance and a shorter backup window
CONS:
Requires use of a separate physical server running Windows (the backup proxy)
Cannot restore directly into the virtual disk

File-Level Restore Methods
A backup is useful only if you can restore from it. There are three approaches to restoring data from a VCB file-level backup. All three assume that you are using a backup agent to perform the restore. The tradeoff between these approaches is the number of backup agents required vs. the ease of performing the restore.
- Centralized Restore
  o With this approach, the backup agent runs only on the proxy server, so the proxy server handles both backups and restores. The backup administrator restores files to a directory on the backup proxy, then uses a Windows file share (i.e., CIFS) to copy the restored files back into the appropriate VM(s). (A copy sketch follows this list.)
- Per-group Restore
  o This is a distributed approach. Separate backups into logical groups, then designate a VM in each group to be responsible for holding restored files for that group, and install a backup agent into that VM. Assign a group administrator to be responsible for restoring data to the designated VM. Note that the backup agent is installed in the VM for restore purposes only; all backups of VMs in the group continue to be performed by the backup proxy. After the group administrator restores files to the designated VM, the files are copied to the appropriate virtual machine using a Windows share.
- Self-service Restore
  o With this approach, a backup agent is installed in each and every VM. As with the per-group restore, the backup agent is used only for restores; backups continue to be performed by the backup proxy.
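As a sketch of the copy step in a centralized restore, run on the backup proxy; the VM name, share, credentials, and paths are hypothetical:

  rem Map the target VM's administrative share and copy the restored files back
  net use Z: \\vm01\C$ /user:vm01\Administrator
  xcopy C:\Restore\vm01 Z:\ /E /I
  net use Z: /delete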

Full Virtual Machine Backup and Restore
To perform a full virtual machine (image-level) backup, the vcbMounter command can be used to back up a VM managed by VirtualCenter. Third-party backup agents are also supported.

vcbMounter syntax example:
  vcbmounter.exe -h 192.168.1.210 -u administrator -p vmware -a ipaddr:192.168.56.124 -t fullvm -r C:\myvm

-h: VirtualCenter Server hostname or IP address
-u: username
-p: VirtualCenter Server password
-a: attribute used to find the VM to back up
-t: type of backup to perform, either file or fullvm
-r: path of the backup location

NOTE: The directory in which the backup will be placed must not exist yet.

If a fullvm backup is performed, vcbMounter exports the VM and copies the VM's files to the backup location specified on the backup proxy server.

To restore from a full virtual machine backup, first copy the files comprising the backup from the VCB proxy server to the service console (one hypothetical copy method is sketched below). Then perform the restore using the vcbRestore command from the service console.

Example: To restore from a backup that has been copied to the service console directory /vmfs/volumes/TempStorage/MyBackup, run the vcbRestore command:
  vcbRestore -h <host> -u <user> -p <password> -s /vmfs/volumes/TempStorage/MyBackup
where:
-h, -u, -p: the same connection options used with vcbMounter
-s: tells vcbRestore where the backup is located
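One hypothetical way to stage the backup files onto the service console before running vcbRestore; the hostname, account, and paths are examples, and this assumes an SSH/SCP service is reachable on the proxy (any file-copy method that reaches the service console works equally well):

  scp -r backupadmin@vcbproxy:/Backups/MyVM /vmfs/volumes/TempStorage/MyBackup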

How VCB Works

VCB Components
The components involved in the VCB backup process are:
- hostd: process on the ESX Server that initiates commands on VMs on behalf of software like VirtualCenter and VCB
- VM to be backed up: VMware Tools will be involved with the backup
- VirtualCenter Server: initiates communication between the VCB proxy and the ESX Server
- Fibre Channel SAN or iSCSI storage: where the VMFS datastore containing the VM's files resides; must be accessible by both the ESX Server and the VCB proxy
- VCB proxy server: physical Windows system that contains the third-party backup software as well as the VCB framework
- 3rd-party backup software (Windows only): consult the Backup Software Compatibility Guide for ESX Server 3.x for the supported list

When a snapshot is created, a disk write buffer is added to the virtual machine in order to allow user access to the virtual machine while it is being backed up. When integrated with supported 3rd-party backup software, a file-level backup is available for Windows virtual machines only. When a full virtual machine backup is performed, each virtual machine's disk is exported into sparse file format before being backed up.

Using VCB Command-Line Utilities
The VCB software contains a set of command-line utilities that allow you to do non-standard tasks, such as back up individual virtual disks of a virtual machine or restore single files from a full virtual machine backup. The command-line utilities exist on both the VCB proxy server and the service console. For the most part, the commands are similar on both; however, not all commands are available on both. The table below shows the various commands and where they apply:

Command       Description                                               On Proxy   On Service Console
vcbVmName     Search for VMs and obtain some basic configuration info      X               X
vcbSnapshot   Create/delete/find/get properties for VM snapshots           X               X
vcbMounter    Get access to VM data to be backed up                        X               X
vcbExport     Export disk from VM snapshot                                 X               X
mountvm       Manually mount a VM's disk(s)                                X
vcbRestore    Restore a full VM backup made by VCB                                         X
vcbUtil       List VMs' resource pools and location information                            X
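As a small usage sketch, vcbVmName can locate a VM via the -s search specifier, which takes a key such as ipaddr:, name:, or uuid:; the server, credentials, and address below are examples:

  vcbVmName -h vc01.example.com -u administrator -p secret -s ipaddr:10.0.0.42

The same command works analogously on the service console (where, per the note below, the name is case-sensitive).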

NOTE: On the proxy server, the commands are not case-sensitive and are located in the directory C:\Program Files\VMware\VMware Consolidated Backup Framework. On the service console, the commands ARE case-sensitive and are located in the directory /usr/sbin.


MODULE 11: High Availability


VMware High Availability (HA) is an option for VMware ESX Server and must be licensed separately. It is an availability solution that deals with failures (host failures in VC2). It has the following characteristics:
- Provides high availability to virtual machines through automatic failover on a cluster of ESX Server hosts
- An optional VirtualCenter feature
- Configuration, management, and monitoring done through the VI Client
- Customizable behavior for individual virtual machines

HA (a reactive solution) is responsible for bringing the VMs back up. DRS (a proactive solution) is responsible for redistributing the load after HA has restarted the VMs on other ESX Servers. If, during a VMotion, either the source or destination host were to fail and leave the VM in a failed state, VMware HA would automatically start the VM on the surviving host (or on some other host if both were to fail).

NOTE: VMware HA in VC 2 deals only with host failures, not individual VM failures. To deal with a VM failure, you can monitor the guest heartbeat of the VM and then take action based on that, such as using a VC alarm to launch a script.

High Availability Strategies
There are three main implementation schemes for clustering in ESX Server:
- Cluster-in-a-box: provides simple clustering to deal with software crashes or administrative errors. The cluster consists of multiple virtual machines on a single ESX Server.
- Cluster-across-boxes: allows you to deal with the crash of an ESX Server, since the virtual machines in the cluster are located across multiple ESX Servers.
- Physical-to-virtual cluster: provides a standby host for multiple physical machines, with multiple virtual machines on one standby box. In other words, a physical machine is clustered with a virtual machine on an ESX Server (the standby host).

Architecture of a VMware HA Cluster
A key component of the VMware HA architecture is the cluster of hosts. When each host is added to the cluster, the VMware HA agent is uploaded to it. The VMs' files are located on shared storage, so each host in the cluster needs access to the same resources. VMware HA does not use VMotion; DRS does. Rebalancing is done by DRS, which uses VMotion to automatically balance the overall cluster load. If you lose the VirtualCenter Server, VMware HA still works, because the HA agents bypass the VirtualCenter Server and talk to each other directly via the service console (on the ESX Servers).

Configure VMware HA

Prerequisites for VMware HA
In order for the HA cluster to work properly, there are two prerequisites:

- You should be able to power on a VM from all hosts within the cluster
  o Each host in the cluster should have access to the virtual machine's files and should be able to power on the VM without problems.
- Each host in the cluster is configured for DNS
  o VMware HA needs to be able to resolve each host's fully qualified domain name (a quick check is sketched below).
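A quick hypothetical DNS check, run from the service console against each cluster member (hostnames are examples):

  nslookup esx01.vclass.local
  nslookup esx02.vclass.local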

What if a Host is Running but Isolated?
Suppose you have a VMware HA cluster with three nodes, and one node loses contact with the other two. How does the node determine whether it has become isolated or whether the other nodes have crashed? VMware HA waits 15 seconds before deciding whether a host is isolated; this interval is configurable in ESX Server 3.5. Each node has a node isolation verification address that it tries to ping to determine whether it is connected to the network or isolated. This verification address must be an address that is known to be up. By default, it is the default gateway of the service console interface. A different isolation address can be specified by setting the advanced option das.isolationaddress, a cluster-wide setting available in the Advanced Options menu of the VMware HA properties (see the sketch below).

Guidelines for Isolation Response Setting
If an ESX Server becomes isolated but its virtual machines can still access the production network, then the isolation response for those virtual machines should be to leave them powered on. However, if a virtual machine will not be able to access the production network when its host becomes isolated from the cluster, then the isolation response should be to power it off so that the other ESX Servers can take ownership of it.
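As a minimal sketch of the cluster-wide advanced options in the VMware HA properties dialog; the address and timeout here are examples to adapt, not recommended values:

  das.isolationaddress = 192.168.1.254     (a pingable address on the service console network)
  das.failuredetectiontime = 15000         (failure/isolation detection time in milliseconds)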

MODULE 12: Security in a VMware Virtualized Environment


VMware Infrastructure Security
The ESX Server supports both locally-attached storage and remote storage such as Fibre Channel, iSCSI (hardware- and software-initiated), and NFS. These are some steps in securing access to remote storage:
- Fibre Channel
  o Zoning
  o LUN masking (on the array or on the ESX Server)
- NFS
  o Share options
  o Network segmentation
- iSCSI
  o CHAP
    ESX Server supports one-way CHAP authentication (i.e., the target authenticates the initiator, not vice versa)
    Per-target CHAP is not supported
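As a sketch of restrictive NFS share options, the export on the NFS server could be limited to the VMkernel addresses of the ESX hosts; the path and addresses are hypothetical (no_root_squash is typically required for ESX NFS datastores):

  # /etc/exports on the NFS server
  /vol/vmstore  10.0.10.11(rw,no_root_squash,sync)  10.0.10.12(rw,no_root_squash,sync)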

Network Security: Security Considerations for VLANs
A VLAN is an effective means of controlling how widely data is transmitted within the network. Here are some ideas on how to secure VLANs:
- Treat VLANs as part of a broader security implementation.
- Create a separate VLAN or virtual switch to facilitate communication between management tools and the service console.
  o Because the service console is the point of control for the ESX Server, safeguarding it from misuse is crucial.
- Note that VMware virtual switches do not support the concept of a native VLAN.
- Set up a separate VLAN or virtual switch for VMotion and for network-attached storage (NAS or iSCSI). (A port-group VLAN sketch follows this section.)

Network Security: Secure Virtual Machines with VLANs
Virtual machines are isolated from each other. One virtual machine cannot read or write another virtual machine's memory, access its data, or use its applications. However, within the network, any virtual machine or group of virtual machines can still be the target of unauthorized access from other virtual machines and might require further protection by external means. Here are some ideas:
- Configure software firewalls on some or all virtual machines.
  o Software firewalls can slow performance, so balance the security needs against the performance needs.
- Install a software firewall on a virtual machine at the head of the virtual network.
- Implement network segmentation.
  o Use separate physical network adapters for virtual machine zones to ensure isolation.
  o Set up VLANs.
- Create different virtual machine zones within an ESX host on different network segments.
  o This minimizes the risk of data leakage from one VM zone to the next.

Network Security: Virtual Switch Protection and VLANs
VMware virtual switches provide safeguards against certain threats to VLAN security and are designed to protect VLANs against attacks that involve VLAN hopping. VMware virtual switches do not:
- Learn MAC addresses from observable traffic (and therefore are not vulnerable to MAC flooding attacks)
- Perform the dynamic trunking required for 802.1q and ISL tagging attacks
- Allow frames to leave their correct broadcast domain (VLAN) (and therefore are not vulnerable to multicast brute-force attacks)
- Speak spanning-tree protocol (STP)
- Support dynamic trunking protocol (DTP), VLAN trunking protocol (VTP), inter-switch link protocol (ISL), or link aggregation control protocol (LACP)
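As a minimal sketch of the port-group VLAN setup referenced above, a separate VLAN-tagged port group for VMotion could be created from the service console; the switch name, port group name, and VLAN ID are examples:

  # Add a port group to an existing virtual switch
  esxcfg-vswitch -A VMotion vSwitch1
  # Tag the port group with VLAN ID 105
  esxcfg-vswitch -v 105 -p VMotion vSwitch1
  # Verify the configuration
  esxcfg-vswitch -l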

Virtual switches also help protect against various other types of attacks, such as double-encapsulation attacks, spanning-tree attacks, and random frame attacks.

How the Service Console is Secured
The service console is protected in the following ways:
- Protected with a firewall
- Only services essential to managing ESX Server are enabled
- All communications from clients are encrypted through SSL by default
- The Tomcat Web service has been modified to run only those functions required for administration and monitoring by a Web client
- FTP and Telnet are not installed, and the ports for these services are closed by default
- The number of applications that use a setuid or setgid flag has been minimized

Managing Firewalls: Client Utilities
The following table lists predetermined TCP and UDP ports used for management access to the VirtualCenter Server, ESX Server hosts, and other network components.

Port        Purpose                                                           Traffic Type
22          Utilized by SSH clients (v2.0 SSH is supported)                   TCP in
80          HTTP access; redirected to port 443                               TCP in
443         HTTPS access                                                      TCP in
902         Authentication traffic from the VI Client to VirtualCenter       UDP out
            or the ESX Server host
903         Remote console traffic generated by user access to virtual      TCP in
            machines (NOTE: may not be used in ESX Server 3.5)
2049        Transactions from your NFS storage devices                        TCP in/out
2050-5000   Traffic between ESX Server hosts for VMware HA and EMC           TCP out, UDP in/out
            Autostart Manager
3260        Transactions from your iSCSI storage devices                      TCP out
8000        Incoming requests from VMotion                                    TCP in/out
8042-8045   Traffic between ESX Server hosts for HA and EMC Autostart        TCP out, UDP in/out
            Manager
27000       License transactions from ESX Server to the license server      TCP out
            (outgoing)
27010       License transactions from the license server (incoming)          TCP in


Service Console Firewall Management
The following are the characteristics of the service console firewall:
- The ESX Server 3.x service console is installed with a packet-filtering firewall by default
  o Inspects and processes all packets based on a set of defined rules
- Service console firewall rules are set from most to least restrictive
  o Deny everything first, then selectively open required ports
- Firewall rules are based on an ordered chain
  o The packets received by the service console firewall are checked against the chain. The three default chains are:
    INPUT: processes all packets received by the service console
    OUTPUT: processes all packets transmitted by the service console
    FORWARD: processes packets received on one interface and forwarded to another. This chain is always disabled on the service console because the service console is not a router.
  o In addition to the three default chains, there are six additional chains on the service console:
    icmp-in
    icmp-out
    log-and-drop
    valid-source-address
    valid-source-address-udp
    valid-tcp-flags
- Each chain has a target that indicates what to do with a packet when it matches a rule
  o Targets used are ACCEPT, DROP, LOG, or REJECT
- The esxcfg-firewall command is used to manage the service console firewall (see the sketch below)
- Changes made to the service console firewall configuration are logged to /var/log/vmware/esxcfg-firewall.log
- Each time the service console firewall is reconfigured, the following steps always occur:
  o Flush the existing firewall rules
  o Delete all the firewall rules
  o Rebuild the firewall rules, opening only those ports required for management
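A few common esxcfg-firewall invocations, as referenced in the list above; the service name and port here are examples (esxcfg-firewall -s prints the list of known services):

  # Query the current firewall configuration
  esxcfg-firewall -q
  # Enable a named service, e.g. the NTP client
  esxcfg-firewall -e ntpClient
  # Open an arbitrary port manually: port,protocol,direction,name
  esxcfg-firewall -o 123,udp,out,ntp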

ESX Server User Administration

- SSH Access
- Configuring Sudo
- TCP Wrapper Configuration

