In my twenty years of enterprise infrastructure experience, I’ve noticed a few things that are universal to every organization.  One of the most time-consuming parts of working in IT is disaster recovery testing.  We all know that business continuity is extremely important, but that doesn’t make testing and executing recovery plans any less expensive.  It takes compute power to take full and incremental copies of the data and, of course, storage to house the backups.  Organizations also spend weeks and weeks of people’s time planning, documenting, executing, and remediating disaster recovery plans.  Until it is needed, business resiliency often seems like a waste of money and time – but that all changes when you need it.  When it is finally needed, everyone remembers what a great investment data protection is, but what about all the rest of the time?  Can’t data resilience be more than a one-trick pony?  The simple answer is “yes”: it is possible to use all the data copies… Read More →

Developer turned network engineer turned developer Matt Oswalt discusses the goal of continuous integration networking: making changes to the network and ensuring each change achieves the desired result.  Matt introduces his open source project ToDD and explains how continuous integration is an achievable long-term goal (a minimal sketch of the idea follows the show notes).

Show Notes

- ToDD GitHub
- Spirient Virtualization Field Day 6 Presentation
- Matt’s blog
- Touchless network configuration use case
- Interop

Read More →
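As a rough illustration of that continuous-integration idea (not ToDD itself), the sketch below shows a post-change validation test that could run in a pipeline after a network change is applied; the target addresses are hypothetical placeholders.

```python
# Minimal sketch: assert the network still delivers the intended result after
# a change. Target addresses are hypothetical; a real suite would test far
# more than reachability (routes, latency, DNS, application paths, etc.).
import subprocess

import pytest

CRITICAL_TARGETS = ["192.0.2.1", "192.0.2.10"]  # e.g. gateway, key service


@pytest.mark.parametrize("target", CRITICAL_TARGETS)
def test_target_reachable_after_change(target):
    # One ICMP echo with a short timeout (Linux ping flags); a failure here
    # fails the CI job and flags the change for review or rollback.
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", target], capture_output=True
    )
    assert result.returncode == 0, f"{target} unreachable after network change"
```

Run with `pytest` as a step in the same pipeline that pushes the configuration change; if any assertion fails, the change never gets marked good.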

In the IT industry today it is nearly impossible not to hear the word cloud dozens of times a day, but many storage administrators treat cloud as a four-letter word.  The basic tenet for a storage administrator is to ensure an organization’s data is safe and secure.  If the storage administrator makes a mistake, bad, bad things happen.  Companies fold, black holes collapse, and suns explode.  NetApp is trying to change the minds of those storage administrators, and for good reason.  IT organizations are always looking to do more work with less money, and cloud storage can’t be ignored as a viable way to do that.  At Storage Field Day 9, NetApp talked a fair amount about how they are embracing cloud storage as key to the industry’s future.  A storage vendor can no longer afford not to embrace cloud storage, and NetApp sees it as key.  Part of the future for NetApp is expanding the… Read More →

This week I’ve been spending some time at Pure Accelerate, where I’ve been able to talk to the engineering and executive teams behind the new FlashBlade system.  In an attempt to embrace its startup cultural roots, Pure Storage developed FlashBlade as a startup inside the company.  What that means is they hired new engineering staff to build a unique and separate product from the ground up.  To keep the development secret, the new team members were not connected to other Pure employees on LinkedIn.  While the development was largely separate, some of the FlashArray development team did help where it made sense.  That collaboration resulted in a fork of the FlashArray management interface, which is used by FlashBlade.  The result of this startup inside a company is a new and unique product.  The first thing to understand about FlashBlade is what it is not.  It is not a replacement for a low latency and… Read More →

If you’ve worked in IT for any amount of time, you’ve likely heard the term “secondary storage,” which you’ve known as a backup tier.  You’ve also heard of “tier 2” storage for test and development workloads that don’t need the data services of production.  These two terms have had very different requirements.  Backup target storage is generally cheap, deep, and optimized for sequential writes.  Test/dev storage, on the other hand, needs different performance characteristics since it runs actual workloads.  Cohesity thinks this needs to change.  They contend that secondary storage needs to be anything that is not primary storage.  Redefining a term and carving out a new market segment is no small task, but Cohesity shows some pretty interesting use cases:

- Data Protection for VMware environments – Once a hypervisor snapshot is created, the data is sent to the Cohesity array where things like deduplication and replication can be applied (a rough sketch of the snapshot step follows below).  This gives you unlimited snaps without the… Read More →
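As a rough illustration of the first step in that VMware protection flow, here is a hypothetical pyVmomi sketch that takes a quiesced virtual machine snapshot a backup platform could then copy data from.  The vCenter host, credentials, and VM name are placeholders, and this is not Cohesity’s actual integration code.

```python
# Hypothetical sketch: take a quiesced, memory-less snapshot of a VM so a
# backup platform can read a consistent point in time from it. All names and
# credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def take_backup_snapshot(vcenter, user, pwd, vm_name):
    context = ssl._create_unverified_context()  # lab only; verify certs in production
    si = SmartConnect(host=vcenter, user=user, pwd=pwd, sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True
        )
        vm = next(v for v in view.view if v.name == vm_name)
        # Quiesced, no in-memory state: a file-system-consistent point in time
        # that the secondary storage platform can copy and deduplicate.
        return vm.CreateSnapshot_Task(
            name="pre-backup",
            description="Snapshot taken before the backup copy",
            memory=False,
            quiesce=True,
        )
    finally:
        Disconnect(si)
```

In a real deployment the backup platform typically drives this step itself through the vSphere APIs and then pulls only the changed data, which is where the deduplication and replication mentioned above come in.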