
There are several types of widgets in vRealize Log Insight that you can add to a

dashboard. These include:


A Chart widget that contains a visual representation of events with a link to
a saved query.
A Query List widget that contains title links to saved queries.
A Field table widget that contains events where each field represents a
column.
A simplified Event Types table widget that contains similar events combined
in single groups.
A simplified Event Trends table widget that shows a list of event types found
in the query, sorted by number of occurrences. This is a quick way to see what
sorts of events are happening very frequently in a query.

The password policy in ESXi 6 has the following requirements:


Passwords must contain characters from at least three character classes.
Passwords containing characters from three character classes must be at least
seven characters long.
Passwords containing characters from all four character classes must be at
least seven characters long.
An uppercase character that begins a password does not count toward the
number of character classes used.
A number that ends a password does not count toward the number of character
classes used.
The password cannot contain a dictionary word or part of a dictionary word.
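
To make the class-counting rules concrete, here is a minimal Python sketch of the
policy above. It is not VMware's actual pam_passwdqc implementation, and the
dictionary check is reduced to a tiny illustrative word list.

import string

# Illustrative word list; the real policy checks against a system dictionary.
DICTIONARY = {"password", "vmware", "admin"}

def character_classes(password):
    # Ignore a leading uppercase letter and a trailing digit, per the policy above.
    core = password
    if core and core[0].isupper():
        core = core[1:]
    if core and core[-1].isdigit():
        core = core[:-1]
    classes = set()
    for ch in core:
        if ch.islower():
            classes.add("lower")
        elif ch.isupper():
            classes.add("upper")
        elif ch.isdigit():
            classes.add("digit")
        elif ch in string.punctuation:
            classes.add("special")
    return len(classes)

def is_acceptable(password):
    if any(word in password.lower() for word in DICTIONARY):
        return False              # contains a dictionary word
    if character_classes(password) < 3:
        return False              # fewer than three character classes
    return len(password) >= 7     # three or four classes: at least seven characters

print(is_acceptable("xQaTEhb!"))   # True
print(is_acceptable("Password1"))  # False: dictionary word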

Does Storage DRS fully support VMFS and NFS storage?


Yes, Storage DRS fully supports VMFS and NFS datastores. However, it does not
allow adding NFS and VMFS datastores to the same datastore cluster.

What are the core Storage DRS features?


Resource aggregation: groups multiple datastores into a single flexible pool of
storage called a datastore cluster (also known as a Storage DRS pod).
Initial placement: handles disk placement for operations such as creating a
virtual machine, adding a disk, cloning, and relocating.
Load balancing based on space and I/O: Storage DRS dynamically balances the
datastore cluster based on the configured space and I/O thresholds. The default
space utilization threshold per datastore is 80% and the default I/O latency
threshold is 15 ms (see the sketch after this list).
Datastore maintenance mode: helps when an administrator wants to perform
maintenance on the storage. Similar to host maintenance mode, Storage DRS uses
Storage vMotion to evacuate all virtual machine files from the datastore.
Inter/intra-VM affinity rules: as the name states, you can define affinity and
anti-affinity rules between virtual machines or between VMDKs.
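
A minimal sketch of the load-balancing decision described above, using the default
thresholds (80% space utilization, 15 ms I/O latency). The datastore figures are
made up, and this is not the actual Storage DRS algorithm.

# Default Storage DRS thresholds
SPACE_THRESHOLD_PCT = 80
IO_LATENCY_THRESHOLD_MS = 15

def needs_rebalance(datastores):
    # Return the names of datastores that exceed either default threshold.
    flagged = []
    for ds in datastores:
        used_pct = 100.0 * ds["used_gb"] / ds["capacity_gb"]
        if used_pct > SPACE_THRESHOLD_PCT or ds["latency_ms"] > IO_LATENCY_THRESHOLD_MS:
            flagged.append(ds["name"])
    return flagged

# Hypothetical datastore cluster
cluster = [
    {"name": "ds01", "capacity_gb": 1024, "used_gb": 900, "latency_ms": 9},   # ~88% used
    {"name": "ds02", "capacity_gb": 1024, "used_gb": 400, "latency_ms": 22},  # high latency
    {"name": "ds03", "capacity_gb": 1024, "used_gb": 300, "latency_ms": 5},
]
print(needs_rebalance(cluster))  # ['ds01', 'ds02']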

What are the requirements of the Storage DRS cluster?


VMware vCenter server 5.0 or later
VMware ESXi 5.0 or later
VMware vSphere compute/hosts cluster (recommended)
VMware vSphere enterprise plus license
Shared VMFS or NFS datastore volumes
Shared datastore volumes accessible by at least one ESXi host inside the
cluster. VMware recommends full cluster connectivity; however, it is not
enforced.
Datastores inside a Storage DRS cluster must be visible in only one datacenter.

What are the constraints on Storage DRS?


VMFS and NFS datastores cannot be part of the same datastore cluster
Max 64 datastores per datastore cluster
Max 256 datastore clusters (pods) per vCenter Server
Max 9,000 VMDKs per datastore cluster (pod)

What are the various disk types Storage DRS supports?


Storage DRS supports the following disk types:
Thick provisioned
Thin provisioned
Independent disk
vSphere Linked clones

Does Storage DRS prefer moving powered-off VMs over powered-on VMs?


Yes, Storage DRS prefers moving powered-off virtual machines over powered-on
virtual machines to reduce Storage vMotion overhead. When moving a powered-off
virtual machine, there is no need to track VMDK block changes.

How does initial placement work for a virtual machine with multiple disks: is the
calculation done on the virtual machine or on the individual disks?
Disks are considered individually, subject to the virtual machine's disk
affinity rules. They can be placed on the same datastore or on different
datastores, but the placement calculation is done per disk.

Does Storage DRS consider VM swap file?


The Storage DRS initial placement algorithm does not take virtual machine swap
file capacity into account. However, subsequent rebalance calculations are based
on the space usage of all datastores, so if a virtual machine is powered on and
has a swap file, the swap file counts toward the total space usage.
The swap file size depends on the virtual machine's configured RAM and reserved
RAM (see the sketch below). If the reserved RAM equals the RAM assigned to the
virtual machine, there is no swap file for that virtual machine. You can also
dedicate one datastore as the swap file datastore, where the swap files of all
virtual machines are stored.
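
A quick arithmetic sketch of the swap file sizing rule described above (swap file
size = configured RAM minus reserved RAM); the values are illustrative.

def swap_file_size_mb(configured_ram_mb, reserved_ram_mb):
    # Swap file size is the configured memory minus the memory reservation.
    return max(configured_ram_mb - reserved_ram_mb, 0)

print(swap_file_size_mb(8192, 0))     # 8192 MB: no reservation, full-size swap file
print(swap_file_size_mb(8192, 4096))  # 4096 MB
print(swap_file_size_mb(8192, 8192))  # 0 MB: fully reserved, no swap file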

Disable Network Rollback


Rollback is enabled by default in vSphere 5.1 and later. You can disable rollback
in vCenter Server by using the vSphere Web Client.
Procedure
In the vSphere Web Client, navigate to a vCenter Server instance.
On the Manage tab, click Settings.
Select Advanced Settings and click Edit.
Select the config.vpxd.network.rollback key, and change the value to false.
If the key is not present, you can add it and set the value to false.
Click OK.
Restart vCenter Server to apply the changes.
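
The same change can be scripted. The sketch below uses pyVmomi and assumes a
reachable vCenter Server; the host name and credentials are placeholders, and
disabling certificate verification is for lab use only.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)

# content.setting is the vCenter Server OptionManager; set the rollback key to "false".
option = vim.option.OptionValue(key="config.vpxd.network.rollback", value="false")
si.RetrieveContent().setting.UpdateOptions(changedValue=[option])

Disconnect(si)
# As with the Web Client procedure, restart vCenter Server for the change to take effect.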

Replicating a Virtual Machine and Enabling Multiple Point in Time Instances


When you configure replication of a virtual machine, you can enable multiple
point in time (PIT) instances in the recovery settings in the Configure Replication
wizard. vSphere Replication retains a number of snapshot instances of the virtual
machine on the target site based on the retention policy that you specify. vSphere
Replication supports a maximum of 24 snapshot instances. After you recover a
virtual machine, you can revert it to a specific snapshot.
vSphere Replication does not replicate virtual machine snapshots.

There are three methods of VLAN tagging that can be configured on ESXi/ESX:
External Switch Tagging (EST)
Virtual Switch Tagging (VST)
Virtual Guest Tagging (VGT)

vSphere Replication interoperability


• Virtual machines protected by VMware vSphere Fault Tolerance (vSphere FT) cannot
be replicated with vSphere Replication.
• Virtual machines protected by VMware vSphere High Availability (vSphere HA) can
be replicated. However, when a replicated virtual machine is recovered by vSphere
HA, vSphere Replication might require a full sync.
• VMware vSphere vMotion® and VMware vSphere Distributed Resource Scheduler™
(vSphere DRS) are fully supported. vSphere Replication will continue without a full
sync.
• vSphere Replication can replicate virtual machines with snapshots. Snapshots at
the source are not reproduced at the target location. vSphere Replication will
recover a virtual machine at the target location, using the latest replicated data
regardless of whether this data was originally stored in a regular virtual disk
file or a virtual machine snapshot file at the source location.
• VMware vSphere vApps™ are not fully supported. Although it is possible to
replicate the virtual machines that compose a vApp, vSphere Replication cannot
replicate the constructs and policies of the vApp itself. Use this work-around:
Replicate the virtual machines contained in the vApp and recover them at the target
location. Create a new vApp at the target location with the same configuration and
policies as the original vApp. Import the recovered virtual machines into the newly
created vApp at the target location.
• VMware vCloud® Networking and Security™ configurations will not be recovered.
Therefore, it is recommended that administrators configure vCloud Networking and
Security solutions for each location and avoid replicating vCloud Networking and
Security virtual appliances.

Does VMware vSphere® Replication™ maintain a virtual machine’s snapshot hierarchy?


No. vSphere Replication does not replicate a virtual machine snapshot
hierarchy from the source location to the target location. Snapshots are collapsed
into a single aggregate virtual machine disk (VMDK) file at the target location.

Add Hosts to a vSphere Distributed Switch


Prerequisites
Verify that enough uplinks are available on the distributed switch to assign
to the physical NICs that you want to connect to the switch.
Verify that there is at least one distributed port group on the distributed
switch.
Verify that the distributed port group has active uplinks configured in its
teaming and failover policy.
If you migrate or create VMkernel adapters for iSCSI, verify that the teaming and
failover policy of the target distributed port group meets the requirements for
iSCSI:
Verify that only one uplink is active, the standby list is empty, and the
rest of the uplinks are unused.
Verify that only one physical NIC per host is assigned to the active uplink.
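
A small sketch of the iSCSI teaming check described above, treating the port
group's teaming policy as plain Python data rather than the actual vSphere API
objects: exactly one active uplink, an empty standby list, and the rest unused.

def iscsi_compliant(teaming_policy):
    # teaming_policy holds 'active', 'standby' and 'unused' uplink name lists.
    return len(teaming_policy["active"]) == 1 and not teaming_policy["standby"]

# Hypothetical distributed port group policies
good = {"active": ["Uplink 1"], "standby": [], "unused": ["Uplink 2", "Uplink 3"]}
bad = {"active": ["Uplink 1", "Uplink 2"], "standby": [], "unused": []}
print(iscsi_compliant(good))  # True
print(iscsi_compliant(bad))   # False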

What must be enabled to ensure that VM Component Protection (VMCP) works in a High
Availability cluster?
All Paths Down (APD) Timeout

vRealize Log Insight search query actions


Save a Query in vRealize Log Insight
Rename a Query in vRealize Log Insight
Load a Query in vRealize Log Insight
Delete a Query from vRealize Log Insight
Share the Current Query
Export the Current Query
Take a Snapshot of a Query

Guest Operating System Customization Requirements


VMware Tools Requirements
The current version of VMware Tools must be installed on the virtual
machine or template to customize the guest operating system during cloning or
deployment.
Virtual Disk Requirements
The guest operating system being customized must be installed on a disk
attached as SCSI node 0:0 in the virtual machine configuration.
Windows Requirements
Microsoft Sysprep tools must be installed on the vCenter Server system.
See Installing the Microsoft Sysprep Tool.
The ESXi host that the virtual machine is running on must be 3.5 or
later.
Linux Requirements
Customization of Linux guest operating systems requires that Perl is
installed in the Linux guest operating system.
Verifying Customization Support for a Guest Operating System
To verify customization support for Windows operating systems or Linux
distributions and compatible ESXi hosts, see the VMware Compatibility Guide. You
can use this online tool to search for the guest operating system and ESXi version.
After the tool generates your list, click the guest operating system to see whether
guest customization is supported.

UEFI Secure Boot for a Virtual Machine Prerequisites


Verify that the virtual machine operating system and firmware support UEFI boot.
EFI firmware
Virtual hardware version 13 or later.
Operating system that supports UEFI secure boot.

Long Distance vMotion requirements in VMware vSphere 6.0


An RTT (round-trip time) latency of 150 milliseconds or less between hosts.
Your license must cover vMotion across long distances. The cross vCenter and
long distance vMotion features require an Enterprise Plus license. For more
information, see Compare vSphere Editions.
You must place the traffic related to virtual machine files transfer to the
destination host on the provisioning TCP/IP stack. For more information, see the
Place Traffic for Cold Migration, Cloning, and Snapshots on the Provisioning TCP/IP
Stack section in the vCenter Server Host Management guide.

Port groups per standard switch 512


Static/Dynamic port groups per distributed switch 10,000
Static/dynamic port groups per vCenter 10,000

Virtual SAN disk groups per host 5


Magnetic disks per disk group 7
SSD disks per disk group 1

Virtual NICs per virtual machine 10

Virtual CPUs per virtual machine (Virtual SMP) 128

LACP - LAGs per host 64

Fault Tolerance maximums


Virtual disks 16
Disk size 2 TB
Virtual CPUs per virtual machine 4
RAM per FT VM 64 GB
Virtual machines per host 4
Virtual CPU per host 8
Virtual SCSI adapters per virtual machine 4
Virtual SCSI targets per virtual SCSI adapter 15
Virtual SCSI targets per virtual machine 60

Which 3 features require a shared storage infrastructure to work properly?


Fault Tolerance
Distributed Power Management
Distributed Resource Scheduler (DRS)

After disabling DRS, the cluster’s resource pool hierarchy and affinity rules are
not retained when DRS is turned back on. If you disable DRS, the resource pools are
removed from the cluster.
Note: If there are vApps in the cluster, disabling DRS prompts you to delete all
vApps in the cluster.
To avoid losing the resource pools when disabling DRS, save a snapshot of the
resource pool tree on your local machine. You can then use the resource pool tree
snapshot to restore the resource pool when you enable DRS.

Fault Tolerance Requirements, Limits, and Licensing


Requirements
Intel Sandy Bridge or later. Avoton is not supported.
AMD Bulldozer or later.
Use a 10-Gbit logging network for FT and verify that the network is low
latency. A dedicated FT network is highly recommended.
Limits
das.maxftvmsperhost - The maximum number of fault tolerant VMs allowed on a
host in the cluster. Both Primary VMs and Secondary VMs count toward this limit.
The default value is 4.
das.maxftvcpusperhost - The maximum number of vCPUs aggregated across all
fault tolerant VMs on a host. vCPUs from both Primary VMs and Secondary VMs count
toward this limit. The default value is 8.
Licensing
The number of vCPUs supported by a single fault tolerant VM is limited by the level
of licensing:
vSphere Standard and Enterprise. Allows up to 2 vCPUs
vSphere Enterprise Plus. Allows up to 4 vCPUs
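
A minimal sketch of how the two per-host limits above (das.maxftvmsperhost and
das.maxftvcpusperhost) could be checked before placing another fault tolerant VM
on a host. The counts are illustrative, and this is not vSphere HA's actual
admission control logic.

DAS_MAX_FT_VMS_PER_HOST = 4    # default das.maxftvmsperhost
DAS_MAX_FT_VCPUS_PER_HOST = 8  # default das.maxftvcpusperhost

def can_place_ft_vm(host_ft_vms, host_ft_vcpus, new_vm_vcpus):
    # Both Primary and Secondary VMs count toward the per-host limits.
    if host_ft_vms + 1 > DAS_MAX_FT_VMS_PER_HOST:
        return False
    if host_ft_vcpus + new_vm_vcpus > DAS_MAX_FT_VCPUS_PER_HOST:
        return False
    return True

print(can_place_ft_vm(host_ft_vms=3, host_ft_vcpus=6, new_vm_vcpus=2))  # True
print(can_place_ft_vm(host_ft_vms=4, host_ft_vcpus=6, new_vm_vcpus=1))  # False: VM limit reached
print(can_place_ft_vm(host_ft_vms=2, host_ft_vcpus=7, new_vm_vcpus=4))  # False: vCPU limit exceeded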

vSphere Features Not Supported with Fault Tolerance


Snapshots. Snapshots must be removed or committed before Fault Tolerance can
be enabled on a virtual machine. In addition, it is not possible to take snapshots
of virtual machines on which Fault Tolerance is enabled.
Note:
Disk-only snapshots created for vStorage APIs for Data Protection (VADP) backups are
supported with Fault Tolerance. However, legacy FT does not support VADP.
Storage vMotion. You cannot invoke Storage vMotion for virtual machines with
Fault Tolerance turned on. To migrate the storage, you should temporarily turn off
Fault Tolerance, and perform the storage vMotion action. When this is complete, you
can turn Fault Tolerance back on.
Linked clones. You cannot use Fault Tolerance on a virtual machine that is a
linked clone, nor can you create a linked clone from an FT-enabled virtual machine.
VM Component Protection (VMCP). If your cluster has VMCP enabled, overrides
are created for fault tolerant virtual machines that turn this feature off.
Virtual Volume datastores.
Storage-based policy management.
I/O filters.

Requirements for Migration Between vCenter Server Instances


The source and destination vCenter Server instances and ESXi hosts must be
6.0 or later.
The cross vCenter Server and long-distance vMotion features require an
Enterprise Plus license. For more information, see
http://www.vmware.com/uk/products/vsphere/compare.html.
Both vCenter Server instances must be time-synchronized with each other for
correct vCenter Single Sign-On token verification.
For migration of compute resources only, both vCenter Server instances must
be connected to the shared virtual machine storage.
When using the vSphere Web Client, both vCenter Server instances must be in
Enhanced Linked Mode and must be in the same vCenter Single Sign-On domain. This
lets the source vCenter Server authenticate to the destination vCenter Server.

Virtual Hardware Version Products


14 ESXi 6.7
13 ESXi 6.5
12 Fusion 8.x
11 ESXi 6.0
10 ESXi 5.5
9 ESXi 5.1
8 ESXi 5.0

Teaming and Failover Policy on a Distributed Port


Load Balancing
Route based on the originating virtual port . Based on the virtual port where
the traffic entered the vSphere distributed switch.
Route based on IP hash. Based on a hash of the source and destination IP
addresses of each packet (a sketch follows this section). For non-IP packets,
whatever is at those offsets is used to compute the hash.
Route based on source MAC hash. Based on a hash of the source Ethernet MAC address.
Route based on physical NIC load. Based on the current load of the physical
network adapters connected to the port. If an uplink remains busy at 75% or higher
for 30 seconds, the host proxy switch moves a part of the virtual machine traffic
to a physical adapter that has free capacity.
Use explicit failover order. Always use the highest order uplink from the
list of Active adapters which passes failover detection criteria.
Note
IP-based teaming requires that the physical switch be configured with EtherChannel.
For all other options, EtherChannel should be disabled.
Failover Order
Active Uplinks — Continue to use the uplink when the network adapter
connectivity is up and active.
Standby Uplinks — Use this uplink if one of the active adapter’s connectivity
is down.
Note
When using IP-hash load balancing, do not configure standby uplinks.
Unused Uplinks — Do not use this uplink.
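
The IP hash policy above maps each packet's source and destination IP addresses to
one of the active uplinks. The sketch below illustrates the general idea with an
ordinary XOR-and-modulo hash; it is not the exact hash the vSphere distributed
switch uses.

import ipaddress

def pick_uplink(src_ip, dst_ip, active_uplinks):
    # XOR the two addresses and take the result modulo the number of active uplinks.
    h = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return active_uplinks[h % len(active_uplinks)]

uplinks = ["Uplink 1", "Uplink 2"]
print(pick_uplink("10.0.0.15", "10.0.1.20", uplinks))
print(pick_uplink("10.0.0.15", "10.0.1.21", uplinks))  # a different flow may pick the other uplink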

Check Compliance for a VM Storage Policy


Compliant The datastore that the virtual machine or virtual disk uses
has the required storage capabilities.
Noncompliant The datastore supports specified storage requirements, but
cannot currently satisfy the virtual machine storage policy. For example, the
status might become Noncompliant when physical resources for the datastore are
unavailable or exhausted. You can bring the datastore into compliance by making
changes in the physical configuration, for example, by adding hosts or disks to the
cluster. If additional resources satisfy the virtual machine storage policy, the
status changes to Compliant.
Out of Date The status indicates that the policy has been edited, but
the new requirements have not been communicated to the datastore where the virtual
machine objects reside. To communicate the changes, reapply the policy to the
objects that are out of date.
Not Applicable This storage policy references datastore capabilities that
are not supported by the datastore where the virtual machine resides.

Requirements and Limitations for vMotion Without Shared Storage


The hosts must be licensed for vMotion.
The hosts must be running ESXi 5.1 or later.
The hosts must meet the networking requirement for vMotion. See vSphere
vMotion Networking Requirements.
vMotion network has at least 250 Mbps of dedicated bandwidth per
concurrent vMotion session
The maximum supported network round-trip time for vMotion migrations is
150 milliseconds (long-distance)
On each host, configure a VMkernel port group for vMotion. To have the
vMotion traffic routed across IP subnets, enable the vMotion TCP/IP stack on the
host.
If you are using standard switches for networking, ensure that the
network labels used for the virtual machine port groups are consistent across
hosts.
Note: By default, you cannot use vMotion to migrate a virtual machine
that is attached to a standard switch with no physical uplinks configured, even if
the destination host also has a no-uplink standard switch with the same label. To
override the default behavior, set the
config.migrate.test.CompatibleNetworks.VMOnVirtualIntranet advanced setting of
vCenter Server to false. The change takes effect immediately.
The virtual machines must be properly configured for vMotion. See Virtual
Machine Conditions and Limitations for vMotion
The source and destination management network IP address families must
match. You cannot migrate a virtual machine from a host that is registered to
vCenter Server with an IPv4 address to a host that is registered with an IPv6
address.
You cannot use migration with vMotion to migrate virtual machines that
use raw disks for clustering.
If virtual CPU performance counters are enabled, you can migrate
virtual machines only to hosts that have compatible CPU performance counters.
You can migrate virtual machines that have 3D graphics enabled. If the
3D Renderer is set to Automatic, virtual machines use the graphics renderer that is
present on the destination host. The renderer can be the host CPU or a GPU graphics
card. To migrate virtual machines with the 3D Renderer set to Hardware, the
destination host must have a GPU graphics card.
You can migrate virtual machines with USB devices that are connected to
a physical USB device on the host. You must enable the devices for vMotion.
You cannot use migration with vMotion to migrate a virtual machine that
uses a virtual device backed by a device that is not accessible on the destination
host. For example, you cannot migrate a virtual machine with a CD drive backed by
the physical CD drive on the source host. Disconnect these devices before you
migrate the virtual machine.
You cannot use migration with vMotion to migrate a virtual machine that
uses a virtual device backed by a device on the client computer. Disconnect these
devices before you migrate the virtual machine.
You can migrate virtual machines that use Flash Read Cache if the
destination host also provides Flash Read Cache. During the migration, you can
select whether to migrate the virtual machine cache or drop it, for example, when
the cache size is large.
Virtual machine disks must be in persistent mode or be raw device mappings
(RDMs). See Storage vMotion Requirements and Limitations. For virtual
compatibility mode RDMs, you can migrate the mapping file or convert to
thick-provisioned or thin-provisioned disks during migration if the destination
is not an NFS datastore. If you convert the mapping file, a new virtual disk is
created and the contents of the mapped LUN are copied to this disk. For physical
compatibility mode RDMs, you can migrate the mapping file only.
Migration of virtual machines during VMware Tools installation is not
supported.
Because VMFS3 datastores do not support large capacity virtual disks,
you cannot move virtual disks greater than 2 TB from a VMFS5 datastore to a VMFS3
datastore.
The host on which the virtual machine is running must have a license
that includes Storage vMotion.
ESXi 4.0 and later hosts do not require vMotion configuration to
perform migration with Storage vMotion.
The host on which the virtual machine is running must have access to
both the source and target datastores.
For limits on the number of simultaneous migrations with vMotion and
Storage vMotion, see Limits on Simultaneous Migrations.
The destination host must have access to the destination storage.
When you move a virtual machine with RDMs and do not convert those RDMs to
VMDKs, the destination host must have access to the RDM LUNs.
Consider the limits for simultaneous migrations when you perform a vMotion
migration without shared storage. This type of vMotion counts against the limits
for both vMotion and Storage vMotion, so it consumes both a network resource and 16
datastore resources.

You can create a virtual machine, resource pool, or child vApp within a vApp.

NFS Protocol Version 4.1


NFS 4.1 provides multipathing for servers that support session trunking.
NFS 4.1 does not support hardware acceleration. This limitation does not
allow you to create thick virtual disks on NFS 4.1 datastores.
NFS 4.1 supports the Kerberos authentication protocol.
NFS 4.1 uses share reservations as a locking mechanism.
NFS 4.1 supports inbuilt file locking.
NFS 4.1 allows nonroot users to access files when Kerberos is used.
NFS 4.1 supports traditional non-Kerberos mounts. In this case, use the security
and root access guidelines recommended for NFS version 3.
NFS 4.1 does not support simultaneous AUTH_SYS and Kerberos mounts.
NFS 4.1 with Kerberos does not support IPv6. NFS 4.1 with AUTH_SYS supports
both IPv4 and IPv6.

ESXi Hardware Requirements


Supported server platform
ESXi 6.0 requires a host machine with at least two CPU cores.
ESXi 6.0 supports 64-bit x86 processors released after September 2006.
ESXi 6.0 requires the NX/XD bit to be enabled for the CPU in the BIOS.
ESXi requires a minimum of 4 GB of physical RAM. It is recommended to provide
at least 8 GB of RAM to run virtual machines in typical production environments.
To support 64-bit virtual machines, support for hardware virtualization
(Intel VT-x or AMD RVI) must be enabled on x64 CPUs.
One or more Gigabit or faster Ethernet controllers
SCSI disk or a local, non-network, RAID LUN with unpartitioned space for the
virtual machines.
For Serial ATA (SATA), a disk connected through supported SAS controllers or
supported on-board SATA controllers. SATA disks will be considered remote, not
local. These disks will not be used as a scratch partition by default because they
are seen as remote.
You cannot connect a SATA CD-ROM device to a virtual machine on an ESXi 6.0
host. To use the SATA CD-ROM device, you must use IDE emulation mode.

Storage I/O Control Requirements
Datastores that are Storage I/O Control-enabled must be managed by a single
vCenter Server system.
Storage I/O Control is supported on Fibre Channel-connected, iSCSI-connected,
and NFS-connected storage. Raw Device Mapping (RDM) is not supported.
Storage I/O Control does not support datastores with multiple extents.
Storage I/O Control (SIOC) requires Enterprise Plus licensing

DCUI/Console of ESXi using ALT+F Keys


ALT+F1 = Switches to the console.
ALT+F2 = Switches to the DCUI.
ALT+F11 = Returns to the banner screen.
ALT+F12 = Displays the VMkernel log on the console.

Proactive HA configuration options.


Quarantine mode for all failures. Balances performance and availability, by
avoiding the usage of partially degraded hosts provided that virtual machine
performance is unaffected.
Quarantine mode for moderate and Maintenance mode for severe failure (Mixed).
Balances performance and availability, by avoiding the usage of moderately degraded
hosts provided that virtual machine performance is unaffected. Ensures that virtual
machines do not run on severely failed hosts.
Maintenance mode for all failures. Ensures that virtual machines do not run
on partially failed hosts.

Limitations of the LACP Support on a vSphere Distributed Switch


LACP is not supported with software iSCSI port binding. iSCSI multipathing
over a LAG is supported if port binding is not used.
The LACP support settings are not available in host profiles.
The LACP support is not possible between nested ESXi hosts.
The LACP support does not work with the ESXi dump collector.
The LACP control packets (LACPDU) do not get mirrored when port mirroring is
enabled.
The teaming and failover health check does not work for LAG ports. LACP
checks the connectivity of the LAG ports.
The enhanced LACP support works correctly when only one LAG handles the
traffic per distributed port or port group.
The LACP 5.1 support only works with IP Hash load balancing and Link Status
Network failover detection.
The LACP 5.1 support only provides one LAG per distributed switch and per
host.

vCenter HA Hardware and Software Requirements


ESXi
ESXi 5.5 or later is required.
Three hosts are strongly recommended. Each vCenter HA node can then run
on a different host for better protection.
Using VMware DRS to protect the set of hosts is recommended. In that
case, a minimum of three ESXi hosts is required.
Management vCenter Server (if used)
Your environment can include a management vCenter Server system, or you
can set up your vCenter Server Appliance to manage the ESXi host on which it runs
(self-managed vCenter Server).
vCenter Server 5.5 or later is required.
vCenter Server Appliance
vCenter Server 6.5 is required.
Deployment size Small (4 CPU and 16GB RAM) or bigger is required to
meet the RTO. Do not use Tiny in production environments.
vCenter HA is supported and tested with VMFS, NFS, and vSAN datastores.
Ensure you have enough disk space to collect and store support bundles
for all three nodes on the Active node.
Network connectivity
vCenter HA network latency between Active, Passive, and Witness nodes
must be less than 10 ms.
The vCenter HA network must be on a different subnet than the
management network.
Licensing required for vCenter HA
vCenter HA requires a single vCenter Server license.
vCenter HA requires a Standard license.

Are there any limits for VDP?


VDP supports these specifications:
Each vCenter Server can support up to 20 VDP appliances.
Each VDP appliance supports backup for up to 100 virtual machines.
Only 1 VDP appliance can exist per ESXi/ESX host.
Support for 0.5 TB, 1 TB, or 2 TB of deduplicated backup data.

VDP comes in 3 different OVA sizes: 0.5 TB, 1 TB, and 2 TB


OVA Size Disk Space Required
0.5 TB 850 GB
1 TB 1600 GB (1.57 TB)
2 TB 3100 GB (3.02 TB)

For vCenter Server 6.0, the bundled PostgreSQL database is suitable for
environments with up to 20 hosts and 200 virtual machines. For the vCenter Server
Appliance, you can use the embedded PostgreSQL database for environments with up to
1,000 hosts and 10,000 virtual machines.
For vCenter Server 6.5, the bundled PostgreSQL database is suitable for
environments with up to 20 hosts and 200 virtual machines.

VDP: 50 VMs per appliance; included in vSphere Essentials Plus and higher.
VDP Advanced: 200 VMs per appliance; application agents; sold separately; CBT; FLR
without agents; replication of backups between VDP Advanced appliances; external
proxy; disk expansion.
Linux FLR requires an external proxy.
Backup storage capacity to appliance RAM sizing (see the lookup sketch below):
2 TB - 6 GB RAM; 4 TB - 8 GB RAM
6 TB - 10 GB RAM; 8 TB - 12 GB RAM
Maximum of 10 VDP appliances per vCenter Server.
Backup job definition: backup source - schedule - retention policy.
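
The capacity-to-RAM sizing above can be expressed as a simple lookup; the figures
are the ones listed in these notes.

# VDP appliance RAM sizing by backup storage capacity (from the notes above)
VDP_RAM_GB_BY_STORAGE_TB = {2: 6, 4: 8, 6: 10, 8: 12}

def required_ram_gb(storage_tb):
    # Round the requested capacity up to the next supported tier.
    for tier in sorted(VDP_RAM_GB_BY_STORAGE_TB):
        if storage_tb <= tier:
            return VDP_RAM_GB_BY_STORAGE_TB[tier]
    raise ValueError("capacity exceeds the largest supported VDP appliance")

print(required_ram_gb(4))  # 8
print(required_ram_gb(5))  # 10 (rounds up to the 6 TB tier)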

vCenter upgrade order: Platform Services Controller, vCenter Server, Update Manager,
hosts, VMware Tools, virtual machine hardware.
