I recently attended Storage Field Day 7 and spent some time talking about the concept of data virtualization, which seeks to add a layer of abstraction between the storage type and the client. Much as server virtualization did for compute resources, data virtualization seeks to free data from the underlying physical resources. Primary Data aims to make data virtualization the cornerstone of software-defined storage.
In November of last year Primary Data came out of stealth to address the problem of data mobility using data virtualization. Today data is locked up in storage arrays, public cloud providers, and local server storage. Each of these types of data repositories has different data service offerings, ranging from rich to extremely limited, and both the metadata and the data are locked away in the silo of the repository. A few data virtualization solutions exist in the market today, but they rely on the data capabilities of the encapsulated storage target. Primary Data offers a true data virtualization platform that brings rich data services into the virtualization layer regardless of the underlying target's capabilities.
The key to Primary Data’s solution lies in the separation of the metadata from the data. Primary Data becomes a control channel that brokers a connection to the data. One of the primary benefits of this approach is that access to the data is completely protocol agnostic. It doesn’t matter whether the backend is direct-attached storage, Fibre Channel, network storage, or even an object layer. Data targets can be flash storage or magnetic media; it makes no difference to Primary Data.
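To make the control-channel idea concrete, here is a minimal sketch of the concept, not Primary Data's actual implementation or API. All names and protocol strings are hypothetical: a metadata service maps logical names to physical locations, so the client never hard-wires a protocol or medium.

```python
# Hypothetical sketch: a metadata service acting as the control channel.
# The client asks *where* the data lives; the broker answers with a
# location that could be DAS, Fibre Channel, NAS, or an object store.

from dataclasses import dataclass

@dataclass
class DataLocation:
    protocol: str   # e.g. "nfs", "iscsi", "s3" -- illustrative values only
    target: str     # address of the backing repository
    path: str       # location of the data within that repository

class MetadataService:
    """Toy control channel: maps logical names to physical locations."""

    def __init__(self) -> None:
        self._catalog: dict[str, DataLocation] = {}

    def register(self, name: str, location: DataLocation) -> None:
        self._catalog[name] = location

    def broker(self, name: str) -> DataLocation:
        # Resolution happens here, not in the client, so the backend
        # can change without the client noticing.
        return self._catalog[name]

svc = MetadataService()
svc.register("reports/q1", DataLocation("s3", "objstore.example", "bkt/q1"))
loc = svc.broker("reports/q1")
print(loc.protocol)  # s3
```

Because only the broker knows the physical location, moving the data to a different target is a metadata update rather than a client-side change.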
Most modern storage arrays fit into either a high-performance all-flash configuration or a hybrid configuration that offers native data tiering. Data is placed on an array, locally on the server, or with a cloud provider based on a few factors, the most prevalent being cost and performance requirements. People say things like “the fastest storage you have” or “the cheapest, slowest storage” but often have no real idea of the performance requirements of the dataset. By storing the metadata and brokering the data connection, Primary Data allows data to be moved to the correct target, ensuring a balance between cost and performance without sacrificing any rich data services. Suddenly an older storage array without robust data services can leverage Primary Data.
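The cost/performance trade-off above can be sketched as a simple placement policy. This is my own illustrative example, with made-up tier names and numbers, not Primary Data's actual policy engine: pick the cheapest tier that still meets the dataset's stated latency requirement.

```python
# Hypothetical placement policy: cheapest tier that meets the latency SLO,
# instead of vague requests like "the fastest storage you have".

TIERS = [
    # (name, worst-case latency in ms, cost per GB) -- illustrative numbers
    ("all-flash array", 1, 0.50),
    ("hybrid array",    5, 0.20),
    ("cloud object",   50, 0.03),
]

def place(required_latency_ms: float) -> str:
    # Walk tiers cheapest-first; take the first one that meets the SLO.
    for name, latency, _cost in sorted(TIERS, key=lambda t: t[2]):
        if latency <= required_latency_ms:
            return name
    # Nothing cheap enough is fast enough; fall back to the fastest tier.
    return min(TIERS, key=lambda t: t[1])[0]

print(place(10))   # hybrid array
print(place(100))  # cloud object
```

The point of the sketch is that placement becomes a function of a measurable requirement, which is exactly what brokering through a metadata layer makes possible.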
Primary Data’s handling of metadata offers several advantages over the normal approach of coupling metadata and data. Primary Data houses metadata centrally and deduplicates it globally, which helps reduce the storage footprint in environments that hold copy after copy of the same data. Because metadata is centrally stored, searches and data accesses are accelerated. This “Big Metadata” approach used by Primary Data has tremendous value for both storage performance and cost.
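Global deduplication of this kind is commonly built on content addressing. The following is a generic sketch of that technique, assuming nothing about Primary Data's internals: identical payloads hash to the same key, so many logical copies consume one physical entry.

```python
# Generic content-addressed catalog illustrating global deduplication:
# N logical names for identical bytes share a single stored copy.

import hashlib

class DedupCatalog:
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}   # hash -> single stored copy
        self._names: dict[str, str] = {}     # logical name -> hash

    def put(self, name: str, payload: bytes) -> None:
        digest = hashlib.sha256(payload).hexdigest()
        self._blobs.setdefault(digest, payload)  # store bytes only once
        self._names[name] = digest

    def physical_entries(self) -> int:
        return len(self._blobs)

cat = DedupCatalog()
cat.put("vm1/disk.img", b"same bytes")
cat.put("vm2/disk.img", b"same bytes")   # a logical copy of the same data
print(cat.physical_entries())  # 1
```

Two logical names, one physical entry: that is the footprint reduction the centralized metadata layer makes possible.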
More and more enterprises are striving to implement software-defined storage and running into a new set of challenges in doing so. The idea is to change how data is stored, placed, and accessed, and Primary Data offers a unique way to do just that. Its policy-based automation aims to become a key to successfully moving storage environments to the next level. Primary Data is definitely looking to make a splash in the storage arena and to help IT shops transform how they think about data today.
Disclaimer: I attended Storage Field Day 7 as a delegate. My travel, accommodations, and most meals were paid for. There was no requirement to blog or tweet about my experiences, and I am not compensated in any way.