Hyperconvergence
Posted at 13:11h in Blog & Opinion, VMworld 2016 by James Green 0 Comments
In the data center, technology advances in distinct steps rather than smoothly along a continuum.
One day we don't have SSDs; the next, the first SSD is generally available and we do.
Market adoption, however, smooths out those distinct steps into what looks more like a gradually
sloping line. Some technologies see more rapid adoption than others, and a recent example of
this is hyperconvergence.
Image source: Joint Flash Memory Summit presentation by EMC, HGST, and Mellanox
BUT! you say. The whole purpose of placing disks inside servers is to achieve the low
latencies associated with that physical proximity. If you put these flash devices in a shared
storage appliance, aren't we right back to traditional SAN and NAS?
That's an astute observation, and that's the key difference between external SSDs accessed via
NVMe over Fabrics as opposed to a traditional remote storage protocol like iSCSI or NFS. Using
one of these SCSI-based protocols can add 100+ microseconds of latency just due to translation
(and thats without accounting for the network).
NVMe/f is different. NVMe commands and structures are transferred end-to-end, resulting in
zero translation overhead, and only a very minimal amount for encapsulation. This design has
allowed the NVMe/f developers to come within a few microseconds of matching direct-attached
PCIe latency. What's the impact of that?
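To make the latency budget concrete, here's a back-of-the-envelope sketch. The SCSI-translation figure comes from the text (100+ microseconds); the device and network numbers are illustrative assumptions, not measurements.

```python
# Illustrative read-latency budget in microseconds.
# DEVICE_READ_US and NETWORK_US are assumed values for illustration only;
# the 100 us SCSI translation cost is the figure cited in the article.
DEVICE_READ_US = 100.0   # raw NVMe flash media read latency (assumption)
NETWORK_US = 10.0        # fabric/network hop cost (assumption)

def end_to_end_latency(device_us, protocol_overhead_us, network_us):
    """Total read latency = media + protocol translation/encapsulation + network."""
    return device_us + protocol_overhead_us + network_us

# iSCSI/NFS: SCSI translation alone can add 100+ us, before the network.
iscsi = end_to_end_latency(DEVICE_READ_US, 100.0, NETWORK_US)

# NVMe/f: commands pass end to end, so only a few us of encapsulation.
nvmef = end_to_end_latency(DEVICE_READ_US, 3.0, NETWORK_US)

# Direct-attached PCIe: no protocol translation, no network hop.
local = end_to_end_latency(DEVICE_READ_US, 0.0, 0.0)

print(f"local={local:.0f}us  nvmef={nvmef:.0f}us  iscsi={iscsi:.0f}us")
```

Even with generous assumptions, the remote NVMe/f device lands within tens of microseconds of local PCIe, while the SCSI-based path roughly doubles the total.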
room in organizations for slower, cheaper storage. With that in mind, many storage experts
propose the following architecture for future storage topologies: one that leverages both
NVMe/f-connected flash devices and more traditional SCSI-based block/file or object storage.
The flash devices at the top of the rack could be used for caching (as will be presented briefly),
or as a sort of Tier 0, where an auto-tiering mechanism promotes and demotes hot data to and
from the top-of-rack flash.
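The promote/demote idea can be sketched with a toy access-count policy. Everything here is a hypothetical illustration: the threshold, the tiny Tier 0 capacity, and the class name are all assumptions, not how any particular product implements tiering.

```python
# Toy auto-tiering sketch (all names and thresholds are illustrative assumptions).
PROMOTE_THRESHOLD = 5   # reads before a block is promoted to flash
TIER0_CAPACITY = 2      # blocks the top-of-rack flash tier holds (tiny, for demo)

class Tiering:
    def __init__(self):
        self.tier0 = {}   # block_id -> access count (hot, on top-of-rack flash)
        self.tier1 = {}   # block_id -> access count (cold, on SAN/NAS)

    def read(self, block_id):
        """Serve a read and return which tier satisfied it."""
        if block_id in self.tier0:
            self.tier0[block_id] += 1
            return "tier0"
        self.tier1[block_id] = self.tier1.get(block_id, 0) + 1
        if self.tier1[block_id] >= PROMOTE_THRESHOLD:
            self._promote(block_id)   # hot data moves up for the *next* read
        return "tier1"

    def _promote(self, block_id):
        if len(self.tier0) >= TIER0_CAPACITY:
            # Demote the coldest flash-resident block to make room.
            coldest = min(self.tier0, key=self.tier0.get)
            self.tier1[coldest] = 0
            del self.tier0[coldest]
        self.tier0[block_id] = self.tier1.pop(block_id)
```

Real tiering engines use far richer heuristics (recency windows, write coalescing, sub-LUN granularity), but the promote-on-heat, demote-on-pressure loop is the same shape.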
The I/O filter running in the user world means that security and kernel stability are not
compromised, yet a solution can be inserted directly into the storage path at the vSphere host
level. In the case of FVS, this will be for the purpose of caching.
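Conceptually, a caching I/O filter sits between the VM and the datastore, answering reads from flash when it can and passing misses through. The sketch below is a minimal read-through LRU cache under assumed names; it illustrates the interception pattern, not the actual vSphere I/O filter API.

```python
# Minimal read-through cache sketch of what a caching I/O filter does
# conceptually. Class and parameter names are illustrative assumptions.
from collections import OrderedDict

class ReadCacheFilter:
    def __init__(self, backing_read, capacity=1024):
        self.backing_read = backing_read   # function: block_id -> bytes (the datastore)
        self.cache = OrderedDict()         # LRU order: block_id -> data
        self.capacity = capacity
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            self.cache.move_to_end(block_id)   # mark as most recently used
            return self.cache[block_id]
        self.misses += 1
        data = self.backing_read(block_id)     # fall through to backing storage
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used block
        return data
```

The key property mirrored from the article: the filter intercepts the I/O path without the guest or the kernel needing to change, and a crash in the filter's user-world process cannot take down the host.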