The Cisco UCS fabric interconnects support two different modes that affect
switching and data flow through each fabric interconnect:
End Host Mode (also referred to as End Host Virtualization, or EHV, mode): EHV is
the default configuration for the UCS fabric interconnects and is considered a best
practice for Vblock. EHV mode makes each fabric interconnect appear to upstream
Cisco switches as if it were a host with many network interfaces. From a data flow
perspective, the end result of EHV is that MAC learning and Layer 2 forwarding are
performed only for devices (blades) connected to ports that are designated as server
ports. A dynamic pinning algorithm is used to pin each blade VIC to an uplink port at
the interconnect, determining the data path for that VIC on that fabric. To upstream
switches, each uplink port on the fabric interconnect presents itself as a NIC on a host,
and no MAC learning or forwarding is performed for uplinks.
Switch Mode: In switch mode (the non-default mode for a Vblock), each fabric
interconnect appears to upstream switches as a Layer 2 switch, and Layer 2 forwarding
is performed on behalf of both uplink and server ports at the fabric interconnect. The
downside of switch mode is that MAC learning is turned on for both uplink and server
ports, causing the fabric interconnect to build a MAC table for upstream data center
devices as well as Vblock devices. (The MAC table supports 32,000 entries, 13,800 of
them usable, so although this raises the possibility that the MAC table could be
overrun, it is an unlikely scenario.) Because switch mode advertises the fabric
interconnect as a Layer 2 switch, the interconnect becomes available to participate in
spanning tree when configured in this mode, which could make the Vblock subject to
STP loop management and port blocking.
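The pinning behavior described above can be sketched in a few lines. This is an
illustrative round-robin model only; the actual UCS dynamic pinning algorithm is not
specified here, and the VIC and uplink names are hypothetical.

```python
# Hypothetical sketch of dynamic pinning in End Host Mode: each blade VIC is
# pinned to exactly one uplink port on its fabric interconnect, and that
# uplink determines the VIC's data path on that fabric. The round-robin
# choice below is illustrative, not the real UCS algorithm.
def pin_vics(vics, uplinks):
    """Assign each VIC a single uplink port (round-robin)."""
    return {vic: uplinks[i % len(uplinks)] for i, vic in enumerate(vics)}

pinning = pin_vics(["blade1-vic", "blade2-vic", "blade3-vic"],
                   ["eth1/1", "eth1/2"])
```

Because each VIC is pinned to one uplink, upstream switches never need to learn MAC
addresses through the interconnect; the uplink simply looks like a host NIC.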
Let us consider the example of deploying eight identically configured blade servers.
Each blade server should have two vNICs and two vHBAs; the internal disks should be
mirrored; the blade should boot from SAN; and the local disks should be scrubbed if
the blade is not associated with a service profile. The blades should all communicate over the
same VLAN and VSAN. In this case it is possible to create eight Service Profiles
ensuring that the policies, VLAN IDs, VSAN IDs are all set identically in each of the
profiles. This could be time consuming and would require great care. Service Profile
Templates greatly simplify this task and enable rapid deployment of servers. The
requirements are now captured in the template and from the template eight identical
profiles can be generated and applied to a pre-defined pool of servers.
Initial service profile template: If a service profile is created from an initial
template, then it inherits all the properties of the service profile template. If,
however, changes are made to the service profile template, then the service profile
must be updated manually because it is not connected to the template.
Updating service profile template: If a service profile is created from an updating
template, then it inherits all the properties of the service profile template. If changes
are made to the service profile template, then these changes are automatically made
to the service profile.
Service Profile Templates are created using the UCS Manager. The template allows one
to specify the various attributes of the server discussed earlier. Among other things, the
template is used to specify the boot policy, local disk policy, assign WWNN, create
vHBAs, assign WWPN, and assign MAC addresses to the vNICs. The corresponding
pools that were created are specified in the template. When a Service Profile is
created from a Service Profile Template, the UUID and WWNN for the server, the
WWPN and MAC address for the vHBAs and vNICs are assigned from the respective
pools specified. After creating a Service Profile from the template, the new service
profile can then be associated with a blade in the UCS chassis.
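The template-to-profile flow above can be modeled as drawing identifiers from the
specified pools. This is an illustrative sketch only; the class, function, and
identifier values are all hypothetical, and UCS Manager performs this assignment
internally.

```python
# Illustrative model of generating service profiles from a template: each
# new profile takes its UUID and WWNN for the server, and WWPNs/MACs for its
# vHBAs/vNICs, from the pools named in the template. All values are made up.
class Pool:
    def __init__(self, values):
        self._free = list(values)

    def take(self):
        return self._free.pop(0)

def profile_from_template(name, uuid_pool, wwnn_pool, wwpn_pool, mac_pool):
    # Two vHBAs and two vNICs, as in the eight-blade example above.
    return {
        "name": name,
        "uuid": uuid_pool.take(),
        "wwnn": wwnn_pool.take(),
        "vhba_wwpns": [wwpn_pool.take(), wwpn_pool.take()],
        "vnic_macs": [mac_pool.take(), mac_pool.take()],
    }

uuids = Pool(["uuid-01", "uuid-02"])
wwnns = Pool(["20:00:00:25:b5:00:00:01", "20:00:00:25:b5:00:00:02"])
wwpns = Pool([f"20:00:00:25:b5:00:0a:{i:02x}" for i in range(4)])
macs = Pool([f"00:25:b5:00:00:{i:02x}" for i in range(4)])

p1 = profile_from_template("esx-blade-1", uuids, wwnns, wwpns, macs)
```

Each subsequent profile created from the same template draws the next free identifier
from each pool, which is why eight identical profiles can be generated without manual
bookkeeping.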
LAN Cloud
Appliances
Internal LAN
Policies
Pools
Traffic Monitoring Sessions
The Audit Logs give detailed insight into operations within the UCS domain, with
hyperlinks to the affected objects within the infrastructure.
Within the Vblock RBAC schema, privileges are cumulative, with the Admin and AAA
roles overriding all. The Admin and AAA roles are global to the UCS, but other roles
can be specific to certain objects and locales within the hierarchy.
The UCS can coexist with external security mechanisms and will validate users
against these listed products and schemas.
Pools provide the ability to allocate server attributes in a UCS domain while enabling
the centralized management of shared system resources.
Worldwide Node Name Pools (WWNN): WWNN pools are used in the UCS environment to
assign a block of virtualized WWNNs that can be assigned to a server when a service
profile is created.
Worldwide Port Name Pools (WWPN): When a profile is being built, the number of
virtual host bus adapters (vHBAs) can be specified. Each vHBA needs to have a unique
virtual WWPN assigned to it. In most cases your WWPN pool should equal the
number of blades multiplied by two, because each blade has two virtual HBAs
present. Multiple WWPN pools can be created on a per-application basis to minimize
SAN zoning requirements.
Server pools: In the UCS environment, servers can be organized into server pools that
can be used to associate servers with a profile. This can be especially useful if your
servers have different physical attributes (processor, memory, and internal disk).
Note: A server can belong to multiple server pools, and a server pool can include
servers from any chassis in the system.
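The WWPN pool sizing rule above is simple arithmetic, shown here as a sketch (the
function name and defaults are illustrative, not a UCS API):

```python
def wwpn_pool_size(num_blades, vhbas_per_blade=2):
    # Rule of thumb from the text: one WWPN per vHBA, and each blade
    # presents two vHBAs by default, so the pool should hold at least
    # blades * 2 WWPNs.
    return num_blades * vhbas_per_blade
```

For the eight-blade deployment discussed earlier, the pool would therefore need at
least sixteen WWPNs.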
Virtual Machines and Virtual Disks exist as files within a VMFS file system that can
either reside on an NFS export or on one or more LUNs. In either case, the VMFS file
system is typically shared among all members of a cluster. This gives the flexibility to
move virtual machines between ESX servers. Boot devices are typically for the
exclusive use of a single ESX server and contain the ESX hypervisor, or virtual
operating system.
RAID 1/0
Offers the best all-around performance of the three supported RAID types.
Offers very good protection and can sustain double drive failures that are not in the same mirror set.
Economy is the lowest of the three RAID types, since usable capacity is only 50% of raw capacity.
RAID 5
Offers the best mix of performance, protection, and economy.
Has a higher write performance penalty than RAID 1/0, since two reads and two writes are required to
perform a single write; however, for large-block sequential writes, optimization eliminates this penalty
since parity may be calculated in memory.
With single parity, this is more susceptible to data loss from a double drive failure, or from an error on
track during a drive rebuild.
Economy is excellent, with usable capacity at 80% of raw capacity.
EMC Best Practice for EFD, SAS and FC drives.
RAID 6
Offers the best protection, with read performance comparable to RAID 5.
Has a significant write performance penalty, since three reads and three writes are required to perform
a single write.
Economy is very good, with usable capacity at 75% of raw capacity.
EMC Best Practice for SATA and NL-SAS drives.
On VNX, the same data protection is defined for all tiers in a mixed pool.
Drive capacities and speeds should not be mixed in the same tier. For example, never mix 600GB 15K SAS and
300GB 15K SAS in the same tier or 300GB 10K SAS and 300GB 15K SAS.
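The usable-capacity figures quoted above can be checked with simple arithmetic. The
sketch below just encodes the percentages from the text (50%, 80%, 75%); the function
name and example drive counts are illustrative.

```python
# Usable-to-raw ratios quoted above: RAID 1/0 mirrors yield 50% of raw
# capacity, RAID 5 (as described, e.g. a 4+1 group) 80%, and RAID 6
# (e.g. a 6+2 group) 75%.
USABLE_RATIO = {"RAID 1/0": 0.50, "RAID 5": 0.80, "RAID 6": 0.75}

def usable_capacity_gb(raid_type, drive_count, drive_size_gb):
    return drive_count * drive_size_gb * USABLE_RATIO[raid_type]
```

For example, five 600 GB drives in RAID 5 would provide roughly 2,400 GB usable, while
the same raw capacity mirrored in RAID 1/0 would provide only 1,500 GB.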
Tier Advisor
Note: The VNX storage systems use 6 Gb/s back-end connections while the VMAX
uses 4 Gb Fibre Channel. This difference is inconsequential; the drive technology
and speed, rather than the back-end interconnect, are more important when
determining performance. Additionally, the form factor is changing, and the VNX now
offers both 2.5" and 3.5" form factors. Again, these produce the same IOPS and response
times, and these decisions are more about footprint than performance.
Overview:
As stated previously, the goal of any storage system is to minimize the limitations of physical hard disk
drives (HDDs). This is accomplished by writing to cache and asynchronously destaging to HDD, by detecting
sequential operations and prefetching data from HDD into cache in anticipation of host requests, and by
exploiting data reuse and locality of reference. However, the amount of available cache on a storage
processor is limited. Using Enterprise Flash Drives (EFDs) in a FAST Cache configuration allows EFDs to be
used as an extension of cache on the storage processor. Copies of frequently accessed data are promoted to
FAST Cache, allowing fast access to data and thus accelerating application performance. A small amount of
EFD can have a significant impact on performance, due to the typical workload skew in which a small portion
of the data does a large percentage of the work. Additionally, not all workloads are consistent, and activity
varies over time. FAST Cache EFD is a shared resource on the system and is much more efficient than
configuring devices entirely on EFD.
Details:
FAST Cache is a feature of the VNX storage system. FAST Cache is not used in the VMAX-based Vblock 2
systems, as these are typically configured with multiple directors and large global memory.
FAST Cache is enabled/disabled on a per-LUN basis for LUNs created from RAID groups, and on a per-pool
basis for thin devices. FAST Cache is not used if a LUN is created on EFD RAID groups or pools.
FAST Cache can be configured up to 2 TB in size. The base configuration for the VCE VNX Series Vblock is 100 200 GB.
FAST Cache is always configured as RAID 1 mirrors. A global hot spare is always configured to provide
proactive sparing and automatic online rebuild in the event of an EFD drive failure.
FAST Cache operates on 64 KB extents. DRAM cache on the storage processor is configurable from a 2 KB to
16 KB extent size.
Relative speed differences: DRAM access is in the nanosecond-to-microsecond range; EFD is in the
microsecond-to-millisecond range, but still on the order of 10X faster than HDD.
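The promotion behavior described above can be sketched as follows. The three-access
threshold and the tracking scheme are invented for illustration; the real FAST Cache
policy engine is not documented here.

```python
# Simplified sketch of FAST Cache promotion: copies of busy 64 KB extents
# are promoted to the EFD cache once they are accessed repeatedly, so later
# reads are served from EFD instead of HDD. The threshold below is a
# hypothetical value, not the actual FAST Cache policy.
PROMOTE_THRESHOLD = 3

def access(extent_id, counts, fast_cache):
    counts[extent_id] = counts.get(extent_id, 0) + 1
    if counts[extent_id] >= PROMOTE_THRESHOLD:
        fast_cache.add(extent_id)          # copy the hot extent to EFD
    return "EFD" if extent_id in fast_cache else "HDD"

counts, fast_cache = {}, set()
results = [access(42, counts, fast_cache) for _ in range(4)]
```

This mirrors the workload-skew point above: only the small set of extents doing most
of the work ever occupies the (limited) EFD cache.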
This slide shows a conceptual diagram of a storage system attached to two hosts. Each
host has an associated Storage Group: Storage Group A for Server A, and Storage
Group B for Server B. The LUNs used on the physical storage system are sequential,
from 0 through 7, but don't have to be. Each LUN on the storage system (ALU, or Array
Logical Unit) has been mapped to a LUN number (sometimes called the LUN alias) as
seen by the host (HLU, or Host Logical Unit). It is important to note that each host
sees LUN 0, LUN 1, etc., and there is no conflict due to multiple instances of the LUN
numbers being used. There is no conflict because the mappings are stored in a
translation table that is a part of the Access Logix database.
Each server sees the LUNs presented to it by its Storage Group as a virtual storage
system.
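The translation table just described can be sketched as a per-host mapping. The layout
mirrors the slide's sequential 0-7 example; the host names and the helper function are
illustrative, not the Access Logix API.

```python
# Sketch of Access Logix-style HLU-to-ALU translation: each Storage Group
# gives its host a private LUN numbering, so both servers can see an
# "HLU 0" that maps to a different array LUN with no conflict.
storage_groups = {
    "ServerA": {0: 0, 1: 1, 2: 2, 3: 3},   # HLU -> ALU
    "ServerB": {0: 4, 1: 5, 2: 6, 3: 7},
}

def alu_for(host, hlu):
    return storage_groups[host][hlu]
```

Both hosts address "LUN 0", yet each request resolves to a different physical array
LUN through its own Storage Group's table.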
Initiator registration records contain the information shown above. This information
will be sent to each attached VNX by the host agent when it starts, typically shortly
after the host OS starts. Optionally, the user may start or stop the agent at any time
from the GUI or CLI. Initiator information may also be manually entered via the GUI
or the CLI, which is necessary on Operating Systems that do not support a host agent.
The storage system uses this information to match hostnames to HBAs, and, since
different operating systems use slightly different sets of SCSI commands, to
determine the appropriate response to host LUN access requests.
Viewing the Connectivity Status window from the Unisphere GUI will display this
information.
Access to the LUNs is controlled by an Access Control List, or ACL, which contains the
128-bit Globally Unique ID of the LUN, and the 128-bit Unique IDs of the HBAs in the
host. The HBA UID consists of a 64-bit World Wide Node Name (WWNN) followed by
a 64-bit World Wide Port Name (WWPN). The LUN UID is assigned to the LUN when it
is bound, and includes time-related information. If the LUN is unbound, and an
identical LUN is bound again, they will have different UIDs.
Each request for LUN access references the ACL in order to determine if a host should
be allowed access.
Thanks to the disk-based nature of the database, it is persistent and can survive
power and/or SP failures. If an SP fails and is replaced, the new SP assumes the
WWPNs of the failed SP, and no changes need be made to the database. If a host HBA
fails and is replaced, the new HBA will have a different WWN, making the host's entry
in the database incorrect. The information for the old HBA must be removed from the
database and the information for the new HBA entered (de-registration and
registration, respectively).
In the event of a front-end path failure there is no need to trespass LUNs immediately. The
Upper Redirector driver routes the I/O to the SP owning the LUNs through the CMI channel.
In the event of a back-end path failure there is no need to trespass LUNs immediately. The
Lower Redirector routes the I/O to the SP owning the LUNs through the CMI channel. The
host is unaware of the failure and the LUNs do not have to be trespassed. An additional,
internal benefit of the Lower Redirector is that the replication software drivers
(including MetaLUN components) are also unaware of the redirect.
EMC PowerPath 5.1 is the minimum required to support ALUA mode. Refer to the
latest configuration guide for up to date requirements.
ESX(i) Server supports various file formats for Virtual Machine virtual disks. Some of these formats
facilitate so-called thin provisioning, or the allocation of capacity on demand. The availability of
different file formats for virtual disks provides Administrators with the opportunity to weigh footprint
versus performance considerations in setting up a suitable storage environment for a VM.
Thick provisioning lengthens the virtual disk provisioning process and increases storage footprint, but
reduces ongoing performance overhead.
Thin provisioning shortens disk provisioning and lowers storage footprint, but increases ongoing
overhead as storage is allocated and zeroed.
Vblock Storage arrays support virtual provisioning (thin provisioning) at the Storage Platform itself.
When coupled with ESX(i) thin provisioning, array-based thin provisioning can reduce the storage
footprint associated with consolidating a number of virtual machines to a storage area. Virtual Desktop
Infrastructure (VDI) solutions frequently leverage virtual provisioning options (such as VNX thin-provisioned snaps) to try to reduce the storage footprint associated with consolidating hundreds of
desktops to storage. Virtual disk thin provisioning makes it possible to have thin virtual machine disks
when they are initially deployed.
The VMFS does not reserve disk space until needed and is distinct from array-based thin volumes. This
technology equips the Administrator with more flexibility in disk space provisioning such as improved
disk utilization and improved disk-related operations, like backup.
The implementation of alarms and alerts in vCenter as well as the VMFS volume grow feature enable
dynamic expansion of shared storage pools cutting down allocated but unused space within the
datastore.
Depending on the Vblock model, cabling to the aggregation layer is either handled
on-site or as part of the VCE manufacturing process (depending on whether an
aggregation layer is included in the Vblock model architecture). In either instance,
best practice is to cross-connect each fabric interconnect to the aggregation layer
switches (5548s or 7010s), to ensure that the loss of a fabric interconnect or switch
does not result in the loss of an entire fabric (A or B). This cabling configuration is
detailed in the VCE build documentation for Vblock, and generally implemented as
part of the manufacturing process. From a data flow perspective, this means that if
the VLANs in a Vblock configuration are distributed across both fabrics, data can
transit either fabric interconnect or switch to the aggregation layer; the actual data
path chosen largely depends upon the uplink the fabric interconnect selects on behalf
of a server CNA (VIC), which is controlled by a dynamic pinning algorithm at the
interconnect.
NPIV provides a means to assign multiple FCIDs to a single N port. This feature allows
multiple applications on the N port to use different identifiers and allows access
control, zoning, and port security to be implemented at the application level.
You must globally enable NPIV for the switch to allow the NPIV-enabled applications
to use multiple N port identifiers.
Management
Each NPV device has its own IP address and management port for
management and debugging
All relevant configs are supported via SNMP and CLI
FM support for discovering and configuring NPV switches
No change in image upgrade and installation procedure
Note: Always check the support matrix / eLab for updated support information.
The slide shows the topology of the Vblock Series 300 infrastructure and the Fibre
Channel switches where NPIV is enabled with the Fabric Interconnects configured in
End Host Mode.
The fabric interconnect operates in N-Port Virtualization (NPV) mode and not as an FC
switch in the fabric. This means it does not require an FC domain ID, which keeps the
number of domain IDs in the SAN fabric the same. The fabric interconnect joins the
fabric through a normal FLOGI. Each FLOGI that comes from the server blade adapters
is translated by the NPV process into an FDISC into the fabric.
Make sure that upstream MDS switches are NPIV enabled and assign the selected
interface to the Cisco UCS with the appropriate VSAN number.
N Port virtualization (NPV) reduces the number of Fibre Channel domain IDs in SANs.
Switches operating in the NPV mode do not join a fabric; rather, they pass traffic
between NPV core switch links and end devices, which eliminates the domain IDs for
these edge switches.
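The FLOGI-to-FDISC translation described above can be sketched conceptually. The
message strings and WWPN placeholders below are simplified illustrations, not real
Fibre Channel frames.

```python
# Conceptual sketch of NPV login translation: the NPV device performs a
# single FLOGI into the core fabric itself (so only one domain-facing
# login, and no extra domain ID), then forwards each blade adapter's
# FLOGI as an FDISC on its behalf.
def npv_logins(blade_wwpns):
    messages = ["FLOGI(npv-uplink)"]  # the NPV device's own fabric login
    messages += [f"FDISC({wwpn})" for wwpn in blade_wwpns]  # translated blade logins
    return messages

msgs = npv_logins(["blade1-wwpn", "blade2-wwpn"])
```

However many blades log in, the fabric sees one FLOGI from the edge device plus a
series of FDISCs, which is why NPV keeps the domain-ID count flat.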
There are several mechanisms for controlling a host's access to LUNs. Soft zoning,
which is zoning the WWPN of the HBA to the WWPN of the VNX SP port, controls
target visibility per HBA. The Fibre Channel switch might also implement hard zoning.
Hard zoning uses a route table, located in the switch, to specify which switch ports
can be included in a zone. This type of zoning ties the HBA port to the SP port through
specific FC switch ports.
Fabric zoning controls target presentation and tells an ESX host whether a target
exists or not. If the host can't get to the target, it can't see the LUNs. In many
well-managed SAN environments, both soft and hard zoning are in use. The purpose of
using both is to make accidental access to volumes by servers very unlikely.
Zoning is especially important in environments where physical Windows servers are
accessing the SAN, because Windows operating systems typically write a disk
signature on any storage volumes they see. These volumes might, in fact, be in use by
non-Windows systems. WWNs are assigned by the manufacturer of the SAN
equipment. HBAs and SPs have WWNs. WWNs are used by SAN administrators to
identify your equipment for zoning purposes.
The VNX, or the hosts themselves, will usually implement LUN masking, which controls
LUN visibility per host. LUN masking can also be done on the ESX/ESX(i) host server;
however, this is not typically done, for the sake of security and data integrity. LUN
masking is normally performed at the VNX level and, with newer switches, can also
be done at a switch/fabric level; however, although LUN masking can be done on the
newest switches, it is not supported.
When a LUN is masked, the VNX hides the fact that the LUN exists from the host, and
hence it does not allow any communication with it.
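The layered access controls above can be modeled as two independent checks. This is a
toy sketch only: the zone entries, host names, and LUN numbers are invented, and real
zoning and masking databases are far richer.

```python
# Toy model of the two layers described above: fabric zoning decides
# whether an HBA can reach an SP port at all, and array-side LUN masking
# decides which LUNs a registered host may use. Access requires both.
zones = {("hba-A", "SPA-0"), ("hba-B", "SPA-0")}   # (HBA WWPN, SP port) pairs
lun_masks = {"hostA": {0, 1}, "hostB": {2, 3}}     # host -> visible LUNs

def can_access(host, hba, sp_port, lun):
    zoned = (hba, sp_port) in zones                 # zoning check
    masked_in = lun in lun_masks.get(host, set())   # LUN masking check
    return zoned and masked_in
```

A host that is zoned but not masked-in (or vice versa) gets nothing, which is exactly
why well-managed SANs deploy both layers.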
- Not all VLANs need to be routed in a Vblock. The NFS, vMotion, and 1000V packet and
control VLANs stay inside the Vblock platform and AMP.
- VLANs that are routed outside the Vblock and AMP include ESX Management and the
Management VMs.
- Customer VM VLANs do not need to touch the AMP unless the customer needs them to.
Auditing and Accountability is the process of reviewing security logs to ensure that a
breach did not occur. This can be done in many different ways. The simplest is to have
all the logs forwarded to a Syslog server for analysis and archival. Automation can be
implemented along with Syslog to parse the log files in almost real-time, or a Security
Incident and Event Monitoring System can be utilized to provide tools and
automation to assist.
Private VLANs are an extension of standard VLANs and are called secondary VLANs, as
they must be associated with a standard, or primary, VLAN. When a private VLAN is
configured, it is classified as either isolated or community. An isolated PVLAN
restricts Layer 2 communication between all devices on the same PVLAN. Those devices
can, however, communicate with promiscuous devices (typically router interfaces) and
with devices outside of the PVLAN normally. A community VLAN groups devices
into subsets, or communities, that are allowed to communicate with one another.
Devices in different communities cannot communicate with each other, however. All
devices, regardless of community, can communicate with the promiscuous devices.
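The reachability rules just described reduce to a short decision procedure. The sketch
below is illustrative only; port roles and community names are hypothetical, and real
PVLAN enforcement happens in switch hardware.

```python
# Sketch of PVLAN reachability: each port is a (role, community) pair,
# where role is "promiscuous", "isolated", or "community".
def pvlan_can_talk(src, dst):
    src_role, src_comm = src
    dst_role, dst_comm = dst
    if "promiscuous" in (src_role, dst_role):
        return True               # everything reaches promiscuous ports
    if "isolated" in (src_role, dst_role):
        return False              # isolated ports reach only promiscuous ones
    return src_comm == dst_comm   # community ports: same community only
```

Note that the promiscuous check comes first: even an isolated port may reach the
router interface, matching the rules above.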
In a virtualized environment, physical firewalls may not be the most effective tool. A
virtual firewall, such as the vShield suite or Cisco's Virtual Security Gateway, can be
used to manage communication between VMs. VMs can be grouped into zones to
configure access controls at the group level, and access controls can be specified at
the VM level using port, protocol, etc.
VSANs offer the same segregation of data as VLANs: no data can get from one VSAN
to another without an explicit routing configuration. VSANs get slightly more complex
to deploy in large numbers due to physical port limitations: host or storage ports can
typically only reside in a single VSAN. If you have UCS firmware 1.4 or higher, F-Port
trunking is supported, which allows multiple VSANs to coexist on a single interface. It
also supports F-Port port-channeling, which allows bonding multiple physical
interfaces together into a logical interface. Without this feature, the number of VSANs
is limited by the number of Fibre Channel ports in the Fabric Interconnect.
The other consideration for the number of usable VSANs is the number of front-end
ports on the storage array. For a VMAX system, the number of ports can scale fairly
high, depending on how many engines are in the array. For a VNX, however, the
maximum number of front-end ports is 10 (12, minus 2 for replication and migration).
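The port math above can be written out explicitly. The function and its parameters
are illustrative arithmetic only; the `vsan_limit` cap is a placeholder, not a quoted
platform maximum, while the VNX figures (12 front-end ports, 2 reserved) come from
the text.

```python
def max_vsans(fc_ports, reserved_ports=0, trunking=False, vsan_limit=256):
    # Without F-Port trunking, each FC port carries exactly one VSAN, so
    # usable ports cap the VSAN count. With trunking, the cap moves to the
    # fabric-wide limit (the value here is a stand-in, not a real spec).
    usable = fc_ports - reserved_ports
    return vsan_limit if trunking else usable

vnx_max = max_vsans(12, reserved_ports=2)
```

So a VNX-backed configuration without trunking tops out at 10 VSANs on the storage
side, matching the front-end port count above.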
Storage separation can take many forms. At the logical level, data can be mingled on
the physical drives, but logical controls such as the LUN boundary and masking
prevent hosts from seeing each other's data. If that level of separation is not
sufficient, you can allocate storage at a physical level, where a pool or disk
group/RAID group can be created for each unit of separation. When combined with
masking controls, the data is then protected at both the logical and physical levels.
Data that is stored on a storage array, or even backup media, can be encrypted for
protection as well. Encrypting and decrypting data requires the use of a key, and in a
typical storage environment, multiple keys will be needed. To manage these keys, a
Key Management Server is used. Currently, in a Vblock infrastructure, data at rest
encryption can be enabled on the Vblock Series 700 MX utilizing the Symmetrix
VMAX array.
The VMAX offers a feature called D@RE, or Data at Rest Encryption. By obtaining
VMAX engines with back-end encryption I/O modules, the array can encrypt and
decrypt data as it is written to and read from the array at line rate. Data is not
encrypted between the host and the array, and is encrypted within the array before it
is written to disk. Conversely, the data is decrypted as it is read from disk and then
sent back to the server in an unencrypted format.
The Service Processor on the VMAX runs a key management client that interfaces
with the key manager server (RSA Data Protection Manager).
PowerPath/VE does not currently provide an encryption option, and there is currently
no encryption option for Vblock Series 300 platforms.
The VMware Update Manager (VUM) is a tool that integrates with vCenter to provide
a patch management solution for a virtual machine environment. VUM can be used
to update vSphere servers for patches as well as upgrades (in some cases). In
addition, it can update patches on Windows and Linux virtual machines. These
patches can be for the operating system, or for services that are integrated with the
OS (Exchange, SQL, etc.). In addition, third-party applications can be written to utilize
VUM for distribution, patching, etc. PowerPath/VE is one such application.
The other components of the Vblock platform (UCS, storage, switches, etc.)
typically don't have patches associated with them. Instead, they have software or
firmware upgrades that are released regularly, but far less frequently than OS patches. These
upgrades are tested by VCE, and if determined to be safe for Vblock infrastructures,
will be made available. Because the Vblock infrastructure is an integrated solution, it
is important that all components are validated with new revisions of any one
component.
The Vblock platform, just like any other piece of technology, is only as secure as the
protocols that are used to access it. Using unsecure protocols provides a vehicle for
attackers to gain access to sensitive information. Using secure protocols prevents an
unauthorized person from using any information that they may gain access to.
Similarly, disabling unused services and keeping the system's internal firewall (if
available) hardened reduces the vulnerability of a system.
Resource pools are finite collections of consumables. These pools can be physical,
such as storage, servers, CPU or memory, or virtual, such as UUID, MAC addresses
and WWPN addresses. The advantage of resource pools is that a Service Provider can
create the resource pools for each consumer, and then that consumer can choose
how to allocate those resources. For example, a particular application may require a
large amount of storage, so it is allocated 50% of the available storage. A different
application may require a significant amount of memory, so it is given 30% of that
resource.
In addition, some resources, such as memory and CPU, can be fluid. That is, in times
of peak demand excess capacity can be dynamically used by applications that require
it, essentially utilizing the same model as QoS.