I started my career in the information technology industry when I was young, so young that I was still in high school.  I wasn’t working at Burger King or the mall like most of my friends; instead, I was putting my passion for technology to work bringing the people of the Cincinnati area dial-up internet access. A few friends and I had connected with a local businessman and, somehow, the idea to start a new company was formed.  In the beginning, it was a handful of us, with me and my friend Todd doing all the server and development work. We spent countless hours of the mid-90s building servers, creating web pages, and working hard in our tiny server room. It felt like the pinnacle of technology to me. The whole world was at our fingertips, just waiting for us to capture it. Roles shifted, and Todd moved on to bigger and better-paying things.  Soon after he… Read More →

In the IT industry today it is nearly impossible not to hear the word cloud dozens of times a day, but many storage administrators treat cloud as a four-letter word.  The basic tenet for a storage administrator is to ensure an organization’s data is safe and secure.  If the storage administrator makes a mistake, bad, bad things happen.  Companies fold, black holes collapse, and suns explode.  NetApp is trying to change the minds of those storage administrators, and for good reason.  IT organizations are always looking to do more work with less money, and cloud storage can’t be ignored as a viable way to do that.   At Storage Field Day 9, NetApp talked a fair amount about how they are embracing cloud storage as key to the industry’s future.  No storage vendor can afford to ignore the cloud any longer, and NetApp clearly understands that.  Part of the future for NetApp is expanding the… Read More →

This week I’ve been spending some time at Pure Accelerate, where I’ve been able to talk to the engineering and executive teams behind the new FlashBlade system. In an attempt to embrace its startup cultural roots, Pure Storage developed FlashBlade as a startup inside the company.  What that means is they hired new engineering staff to build a unique and separate product from the ground up.  To keep the development secretive, the new team members were not connected to existing Pure employees on LinkedIn. While the development was largely separate, some of the FlashArray development team did help where it made sense.  That collaboration resulted in a fork of the FlashArray management interface, which is used by FlashBlade. The result of this startup-within-a-company approach is a new and unique product. The first thing to understand about FlashBlade is what it is not.  It is not a replacement for a low-latency and… Read More →

If you’ve worked in IT for any amount of time, you’ve likely heard the term “secondary storage,” which you’ve known as a backup tier.  You’ve also heard of “tier 2” storage for test and development workloads that don’t need the data services of production.  These two terms have had very different requirements.  Backup target storage is generally cheap, deep, and optimized for sequential writes.  Test/dev storage, on the other hand, needs real performance since it hosts actual workloads. Cohesity thinks this needs to change.  They contend that secondary storage needs to be anything that is not primary storage. Redefining a term and carving out a new market segment is no small task, but Cohesity shows some pretty interesting use cases: Data Protection for VMware environments – Once a hypervisor snapshot is created, the data is sent to the Cohesity array, where things like deduplication and replication can be applied. This gives you unlimited snaps without the… Read More →
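To make that VMware data-protection flow concrete, here is a minimal sketch of the snapshot step using pyVmomi. The vCenter hostname, credentials, and VM name are placeholders, and send_to_cohesity() is a hypothetical stand-in for whatever transport the platform actually uses, not a real Cohesity API.

```python
# Minimal sketch of the snapshot step in a VMware data-protection flow.
# Host, credentials, and VM name are placeholders; send_to_cohesity() is
# hypothetical, since Cohesity's actual ingest mechanism is not shown here.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask

context = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    vm = content.searchIndex.FindByDnsName(None, "app01.example.com", vmSearch=True)

    # A quiesced, memory-less snapshot gives the backup platform a consistent
    # point-in-time image to read data from.
    task = vm.CreateSnapshot_Task(name="protection-point",
                                  description="backup snapshot",
                                  memory=False, quiesce=True)
    WaitForTask(task)

    # send_to_cohesity(vm)  # hypothetical: ship the snapshot data to the array,
    #                       # where dedupe and replication would then be applied
finally:
    Disconnect(si)
```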

I recently attended Storage Field Day 7 and spent some time talking about the concept of data virtualization.  Data virtualization seeks to add a layer of abstraction between the storage type and the client. Similar to what server virtualization did for compute resources, data virtualization seeks to free the data from the underlying physical resources. Primary Data seeks to make data virtualization the cornerstone of software-defined storage. In November of last year Primary Data came out of stealth to address the problem of data mobility using data virtualization.  Today data is locked up in storage arrays, public cloud providers, and local server storage.  Each of these types of data repositories has different data service offerings, ranging from rich to extremely limited. The metadata and data are locked away in the silo of the repository. A few solutions exist in the market today for data virtualization, but they rely on the data capabilities of the… Read More →
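As a rough illustration of the concept (my own sketch, not Primary Data’s implementation), a data virtualization layer boils down to a metadata service that maps a logical namespace onto interchangeable physical repositories, so data can move without clients noticing:

```python
# Conceptual sketch of data virtualization: clients address logical paths,
# while a metadata layer decides which physical repository holds the bytes.
# Illustration of the idea only, not Primary Data's implementation.
from abc import ABC, abstractmethod

class Repository(ABC):
    """A physical data silo: array, cloud bucket, or local server storage."""
    @abstractmethod
    def read(self, key: str) -> bytes: ...
    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

class LocalRepository(Repository):
    def __init__(self):
        self._store: dict[str, bytes] = {}
    def read(self, key): return self._store[key]
    def write(self, key, data): self._store[key] = data

class DataVirtualizer:
    """Maps logical paths to repositories; moving data only updates metadata."""
    def __init__(self):
        self._placement: dict[str, Repository] = {}  # the metadata layer
    def write(self, path: str, data: bytes, repo: Repository):
        repo.write(path, data)
        self._placement[path] = repo
    def read(self, path: str) -> bytes:
        return self._placement[path].read(path)  # client never names the silo
    def migrate(self, path: str, target: Repository):
        data = self.read(path)
        target.write(path, data)
        self._placement[path] = target  # data moves; the namespace doesn't

fast, cheap = LocalRepository(), LocalRepository()
dv = DataVirtualizer()
dv.write("/vols/db01", b"hot data", fast)
dv.migrate("/vols/db01", cheap)  # transparent to the client
assert dv.read("/vols/db01") == b"hot data"
```

The point of the sketch is that migrate() changes only the placement metadata; the logical path /vols/db01 never changes, which is exactly the mobility the silos lock away today.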

EMC has unveiled its new message of Powerful, Trusted, Agile at its Redefine event today. Along with that comes the next-generation VMAX hardware, the VMAX3. With this newest revision of EMC’s flagship product, they look to bring the trust and control of centralized IT together with the cost, agility, and scale of modern self-service IT. This release brings us three new hardware models: 100K, 200K, and 400K. Each of the new models is built on a single new architecture designed for hybrid cloud scale. This includes a new operating system named Hypermax and a major overhaul of the virtual matrix. The RapidIO Virtual Matrix has been replaced with a 56Gb/s InfiniBand Dynamic Virtual Matrix.  What do they mean by dynamic? This new design allows for the vertical and horizontal movement of CPU resources inside the array.  CPU resources are divided into three groups, or pools: front-end host access, back-end storage access, and data… Read More →
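To give a feel for what “dynamic” could mean in practice, here is a toy model of proportional core reassignment among three pools. The pool names are my paraphrase of the groups above, and EMC has not published the Hypermax scheduler’s internals, so this is an illustration of the idea only.

```python
# Toy model of a dynamic CPU pool: cores are reassigned among front-end,
# back-end, and data-services work as demand shifts. Illustration only;
# the actual Hypermax scheduler is not public.

POOLS = ["front_end", "back_end", "data_services"]  # paraphrased pool names

def rebalance(total_cores: int, demand: dict[str, float]) -> dict[str, int]:
    """Split cores proportionally to demand, guaranteeing one core per pool."""
    weights = {p: max(demand.get(p, 0.0), 0.0) for p in POOLS}
    total = sum(weights.values()) or 1.0
    alloc = {p: max(1, round(total_cores * w / total)) for p, w in weights.items()}
    # Fix rounding drift so the allocation sums to exactly total_cores.
    while sum(alloc.values()) > total_cores:
        alloc[max(alloc, key=alloc.get)] -= 1
    while sum(alloc.values()) < total_cores:
        alloc[max(weights, key=weights.get)] += 1
    return alloc

# A host-I/O burst pulls cores toward front-end work...
print(rebalance(48, {"front_end": 0.7, "back_end": 0.2, "data_services": 0.1}))
# ...while a rebuild or replication job pulls them back toward the back end.
print(rebalance(48, {"front_end": 0.2, "back_end": 0.6, "data_services": 0.2}))
```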