In the IT industry today it is nearly impossible not to hear the word cloud dozens of times a day, but many storage administrators treat cloud as a four-letter word.  The basic tenet for a storage administrator is to ensure an organization’s data is safe and secure.  If the storage administrator makes a mistake, bad, bad things happen.  Companies fold, black holes collapse, and suns explode.  NetApp is trying to change the minds of those storage administrators, and for good reason.  IT organizations are always looking to do more work with less money, and cloud storage can’t be ignored as a viable way to do that.  At Storage Field Day 9, NetApp talked a fair amount about how they are embracing cloud storage as key to the industry’s future.  No storage vendor can afford not to embrace cloud storage, and NetApp sees it as key.  Part of the future for NetApp is expanding the… Read More →

This week I’ve been spending some time at Pure Accelerate, where I’ve been able to talk to the engineering and executive teams behind the new FlashBlade system. In an attempt to embrace its startup cultural roots, Pure Storage developed FlashBlade as a startup inside the company.  What that means is they hired new engineering staff to build a unique and separate product from the ground up.  To keep the development secretive, the new team members were not connected to existing Pure employees on LinkedIn. While the development was largely separate, some of the FlashArray development team did help where it made sense.  That collaboration resulted in a fork of the FlashArray management interface, which is used by FlashBlade. The result of this startup within a company is a new and unique product. The first thing to understand about FlashBlade is what it is not.  It is not a replacement for a low-latency and… Read More →

If you’ve worked in IT for any amount of time you’ve likely heard the term “secondary storage,” which you’ve known as a backup tier.  You’ve also heard of “tier 2” storage for test and development workloads that don’t need the data services of production.  These two terms have had very different requirements.  Backup target storage is generally cheap, deep, and optimized for sequential writes.  Test/dev storage, on the other hand, needs real performance since it hosts actual workloads. Cohesity thinks this needs to change.  They contend that secondary storage should be anything that is not primary storage. Redefining a term and carving out a new market segment is no small task, but Cohesity shows some pretty interesting use cases: Data protection for VMware environments – once a hypervisor snapshot is created, the data is sent to the Cohesity array, where things like deduplication and replication can be applied. This gives you unlimited snaps without the… Read More →
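The teaser only hints at the mechanics, but a minimal sketch of that dedupe-and-replicate flow might look like the following. This is my illustration, not Cohesity’s actual API; the class and method names are hypothetical.

```python
import hashlib


class SecondaryStorageTarget:
    """Hypothetical sketch of a Cohesity-style secondary storage target."""

    def __init__(self):
        self.chunks = {}     # content hash -> block (dedup store)
        self.snapshots = []  # metadata-only snapshot manifests

    def ingest_snapshot(self, vm_name, blocks):
        """Accept hypervisor snapshot data; duplicate blocks consume no new space."""
        manifest = []
        for block in blocks:  # blocks are raw bytes from the snapshot
            digest = hashlib.sha256(block).hexdigest()
            self.chunks.setdefault(digest, block)  # dedupe on content
            manifest.append(digest)
        self.snapshots.append({"vm": vm_name, "blocks": manifest})

    def replicate_to(self, remote):
        """Ship only the chunks the remote site is missing, then copy manifests."""
        for digest, block in self.chunks.items():
            remote.chunks.setdefault(digest, block)
        remote.snapshots = list(self.snapshots)


local = SecondaryStorageTarget()
local.ingest_snapshot("vm01", [b"block-a", b"block-b", b"block-a"])  # 3 refs, 2 chunks
local.replicate_to(SecondaryStorageTarget())
```

Because snapshots in this model are just manifests of content hashes, keeping many of them is cheap, which is what makes the “unlimited snaps” pitch plausible.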

Gartner recently released their Magic Quadrant (MQ) for x86 Server Virtualization Infrastructure. Both The Register and ZDNet have questioned the logic in the research. Many of the subject matter experts I follow on Twitter have questioned whether the MQ is little more than marketing for the vendors that spend money with Gartner. I tackle the question and make some suggestions for Gartner, including following the lead of Big 4 consulting practices that have to deal with similar issues. Read More →

My wife’s legacy PDA may be the perfect metaphor for the advantages and disadvantages of hyperconverged infrastructure. I discuss why convergence may benefit the majority of data centers while not being a universal solution for every enterprise. Read More →

I recently attended Storage Field Day 7 and spent some time talking about the concept of data virtualization.  Data virtualization seeks to add a layer of abstraction between the storage type and the client. Similar to what server virtualization did for compute resources, data virtualization seeks to free the data from the underlying physical resources. Primary Data seeks to make data virtualization the cornerstone of software-defined storage. In November of last year Primary Data came out of stealth to address the problem of data mobility using data virtualization.  Today data is locked up in storage arrays, public cloud providers, and local server storage.  Each of these types of data repositories has different data service offerings, ranging from rich to extremely limited. The metadata and data are locked away in the silo of the repository. A few solutions exist in the market today for data virtualization, but they rely on the data capabilities of the… Read More →
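To make the abstraction concrete, here is a minimal sketch of the idea, assuming a simple metadata catalog in front of interchangeable repositories. It is my illustration of data virtualization in general, not Primary Data’s implementation, and every name in it is hypothetical.

```python
class DictBackend:
    """Stand-in for a storage array, cloud bucket, or local disk."""

    def __init__(self):
        self.store = {}

    def put(self, name, data):
        self.store[name] = data

    def get(self, name):
        return self.store[name]

    def delete(self, name):
        self.store.pop(name, None)


class DataVirtualizationLayer:
    """Clients see logical names; a metadata catalog maps them to repositories."""

    def __init__(self, backends):
        self.backends = backends  # e.g. {"array": ..., "cloud": ..., "local": ...}
        self.catalog = {}         # logical name -> backend key (metadata only)

    def write(self, name, data, placement="array"):
        self.backends[placement].put(name, data)
        self.catalog[name] = placement

    def read(self, name):
        # The client never learns which silo the bytes live in.
        return self.backends[self.catalog[name]].get(name)

    def migrate(self, name, target):
        """Move data between repositories; clients keep using the same name."""
        source = self.catalog[name]
        data = self.backends[source].get(name)
        self.backends[target].put(name, data)
        self.backends[source].delete(name)
        self.catalog[name] = target


dvl = DataVirtualizationLayer({"array": DictBackend(), "cloud": DictBackend()})
dvl.write("report.csv", b"rows", placement="array")
dvl.migrate("report.csv", "cloud")   # data moved; the read path is unchanged
assert dvl.read("report.csv") == b"rows"
```

The key design point is that only the catalog entry changes when data moves, which is what lets the data escape any one repository’s silo.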