EMC announces its next Generation VMAX Array

EMC has unveiled its new message of Powerful, Trusted, Agile at its Redefine Possible event today. Along with that comes the next-generation VMAX hardware, the VMAX3. With this newest revision of its flagship product, EMC looks to bring the trust and control of centralized IT together with the cost, agility, and scale of modern self-service IT. This release brings us three new hardware models: 100K, 200K, and 400K. Each of the new models is built on a single new architecture designed for hybrid cloud scale. This includes a new operating system named Hypermax and a major overhaul of the Virtual Matrix.
The RapidIO Virtual Matrix has been replaced with a 56 Gb/s InfiniBand Dynamic Virtual Matrix. What do they mean by dynamic? This new design allows for the vertical and horizontal movement of CPU resources inside the array. CPU resources are divided into three groups, or pools: front-end host access, back-end storage access, and data services. Vertical movement shifts CPU cores between pools. Horizontal movement does away with the CPU-to-director constraint of the past by allowing CPU resources to be shared within each pool. For example, if a front-end director suddenly needs more CPU than it has, it can pull from the entire pool of resources. This is a pretty awesome feature if you've ever spiked the CPU on a director.
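A rough way to picture vertical movement is as cores being reassigned between pools on demand. The sketch below is purely illustrative Python; the pool names, core counts, and the `move_cores` API are my assumptions, not EMC's implementation.

```python
# Illustrative model of the Dynamic Virtual Matrix CPU pools.
# Pool names, core counts, and this API are assumptions made for
# the sketch, not EMC's actual implementation.

class DynamicVirtualMatrix:
    def __init__(self):
        # CPU cores are divided among three pools.
        self.pools = {"front_end": 16, "back_end": 16, "data_services": 16}

    def move_cores(self, src, dst, count):
        """Vertical movement: shift cores from one pool to another."""
        if self.pools[src] < count:
            raise ValueError(f"{src} pool only has {self.pools[src]} cores")
        self.pools[src] -= count
        self.pools[dst] += count

# A front-end director spiking on CPU no longer has to live within a
# fixed per-director allocation; cores can be pulled from elsewhere:
matrix = DynamicVirtualMatrix()
matrix.move_cores("data_services", "front_end", 4)
print(matrix.pools)  # {'front_end': 20, 'back_end': 16, 'data_services': 12}
```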


This new Dynamic Virtual Matrix is enabled by a significant enhancement to the Enginuity operating system: Hypermax. It brings an embedded hypervisor that can run various data services directly on the VMAX hardware. Tools such as VPLEX, RecoverPoint, file and cloud gateways, and management tools can now run directly on the hypervisor for ultra-low-latency access while decreasing the footprint and cost required to run these data services.


VMAX3 allows you to manage application storage using Service Level Objectives (SLOs) with policy-based automation rather than the manual tiering we have today. This lets you ensure a certain level of performance for your workload over the entire lifetime of the data and the array. As you add workload, additional resources may be allocated to maintain your service level. The VMAX3 comes with four SLO policies defined; each has a set of workload characteristics which determine the drive types that will be used for that SLO.
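As a sketch of how policy-based placement differs from picking tiers by hand, here's a hypothetical SLO table in Python. The policy names, response-time targets, and drive mixes below are my assumptions for illustration, not EMC's published definitions (those are in the figure that follows).

```python
# Hypothetical SLO policy table: each policy pairs a response-time
# target with the drive types eligible to serve it. All values here
# are illustrative assumptions, not EMC's published definitions.
SLO_POLICIES = {
    "Gold":      {"target_ms": 1.0,  "drives": ["EFD"]},
    "Silver":    {"target_ms": 5.0,  "drives": ["EFD", "15K_FC"]},
    "Bronze":    {"target_ms": 10.0, "drives": ["15K_FC", "7.2K_SATA"]},
    "Optimized": {"target_ms": None, "drives": ["EFD", "15K_FC", "7.2K_SATA"]},
}

def eligible_drives(slo_name):
    """The admin picks an SLO; the array, not the admin, decides
    data placement across these drive types to hold the target."""
    return SLO_POLICIES[slo_name]["drives"]

print(eligible_drives("Gold"))  # ['EFD']
```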

[Figure: VMAX3 defined SLO policies]

The VMAX3 is offered in three different models: 100K, 200K, and 400K. While these may seem to map to today's 10K, 20K, and 40K, they do not; EMC believes most existing 20K and 40K arrays will fit nicely into the 200K. Highlighted hardware changes include much-needed support for 16 Gb/s FC, a native 6 Gb/s SAS back end allowing for three times the bandwidth, high-density DAEs allowing for up to 720 drives per engine using the 2.5" form factor (or 360 using 3.5"), Ivy Bridge CPUs, and support for larger memory configurations.


VMAX 100K: 1 to 2 engines, 250K IOPS, 1440* 2.5" drives, 2.4 PB*, 64 ports
VMAX 200K: 1 to 4 engines, 850K IOPS, 2880* 2.5" drives, 4.8 PB*, 128 ports
VMAX 400K: 1 to 8 engines, 3.2M IOPS, 5760* 2.5" drives, 9.6 PB*, 256 ports


We're also getting some changes to TimeFinder, allowing for snapshots at hybrid cloud scale. SnapVX will now allow you to have up to 1024 copies per source. Each snap is targetless, requiring no setup of save pools and the like; snapshots aren't devices at all. A device is only required when you access a snapshot, by binding a device to it. Snaps have no impact on I/O.
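To make "targetless" concrete, here's a minimal sketch of the model in Python. The class and method names are mine, not a real SnapVX API; the 1024-per-source cap is from the announcement.

```python
# Minimal model of targetless snapshots: a snapshot is just named
# point-in-time metadata against the source, capped at 1024 per
# source. A device enters the picture only when you bind one for
# host access. Names and structure here are illustrative, not a
# real SnapVX API.

class SourceDevice:
    MAX_SNAPS = 1024

    def __init__(self, name):
        self.name = name
        self.snapshots = {}   # snapshot name -> point-in-time metadata

    def snap(self, snap_name):
        if len(self.snapshots) >= self.MAX_SNAPS:
            raise RuntimeError("1024 snapshots per source reached")
        # No save pool, no target device: just record the point in time.
        self.snapshots[snap_name] = {"bound_device": None}

    def bind(self, snap_name, device):
        # A device is required only when a host needs to access the snap.
        self.snapshots[snap_name]["bound_device"] = device

src = SourceDevice("prod_lun_01")
src.snap("daily_0700")              # targetless: no device consumed
src.bind("daily_0700", "dev_0A2B")  # bind a device only for access
```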


With these new hypervisor-based data services, EMC also announced something called EMC ProtectPoint, which was demoed at EMC World. For more information on that, you can check out my post.
