As a longtime enterprise infrastructure specialist, I’ve spent countless hours trying to optimize the performance of the environments I manage. Early in my career, I spent some time on a team that worked very closely with the monitoring team, where I learned how hard it is to correlate the volumes of data collected. We were collecting so much data about our environment that it was almost overwhelming: CPU temperature, how many storage IOs were pending, memory usage, and more. We had all this awesome data, and what did we do with it? We set up monitoring to make sure the numbers didn’t cross certain thresholds. When one did, we sent an alert. All this data at our fingertips, and all we used it for was alerting. I knew something was off, but I was green and didn’t understand that we were missing the bigger picture. That was a long time ago…

In my twenty years of enterprise infrastructure experience, I’ve noticed a few things that are universal to every organization. One of the most universally time-consuming parts of working in IT is disaster recovery testing. We all know that business continuity is extremely important, but that doesn’t make testing and executing recovery plans any less expensive. It takes compute power to take full and incremental copies of the data and, of course, storage to house the backups. Organizations also spend weeks and weeks of people’s time planning, documenting, executing, and remediating disaster recovery plans. Until it’s needed, business resiliency often seems like a waste of money and time – but that all changes when you need it. When it’s finally needed, everyone remembers what a great investment data protection is, but what about the rest of the time? Can’t data resilience be more than a one-trick pony? The simple answer is “yes”: it is possible to use all those data copies…