What the heck is tail latency anyways?

Sometimes when you’re roasting a Thanksgiving turkey in the oven, you notice what appears to be uneven cooking on the bird. Uneven cooking creates a problem for the cook because you risk drying out one part or undercooking another. The last thing you want to do is serve undercooked and dangerous food. The second-to-last thing you want to do is serve dry turkey to your grandma. Uneven cooking is the bane of cooks everywhere, so they take great care to avoid and mitigate the problem. No, you haven’t stumbled onto a random food post; I just wanted to point out that consistency and predictability matter in the kitchen. And just like they matter in the kitchen, they matter in nearly every aspect of technology.

We want web pages to load quickly, search results to return instantly, and our turkeys to cook evenly. We want gravy without lumps and wifi without bumps. Nowhere are consistency and predictability more important than the world of enterprise storage. The metric we tend to value most when measuring storage performance is latency. We can categorize storage latency into two buckets: average latency and tail latency.

The Storage Networking Industry Association (SNIA) talked a lot about tail latency at Storage Field Day 12, where I was unfortunately not present. Average latency is the average amount of time it takes to complete a single transaction. Think of this as your storage device’s reaction time. While less latency is always better, average latency is fine because it’s predictable. It’s the devil you know. What we need to watch out for is called tail latency. Tail latency is the type of latency spike that will make you serve that dry turkey to grandpa. It’s not predictable. You can’t plan for it. All you can do is throw some gravy on it and hope. According to SNIA, these spikes in latency can be up to 10x that of average latency.
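To make the distinction concrete, here is a minimal Python sketch; the latency distribution is invented for illustration, but it shows how a small fraction of 10x outliers barely moves the average while completely dominating the tail percentile:

```python
import random
import statistics

# Simulated per-IO completion times in milliseconds. The distribution is
# invented for illustration: most IOs cluster near 1 ms, but a small
# fraction spike to roughly 10x the average.
random.seed(42)
latencies = [random.gauss(1.0, 0.1) for _ in range(10_000)]
for i in random.sample(range(len(latencies)), 200):  # ~2% of IOs spike
    latencies[i] *= 10

latencies.sort()
average = statistics.mean(latencies)
p99 = latencies[int(0.99 * len(latencies))]  # 99th-percentile (tail) latency

print(f"average latency: {average:.2f} ms")  # barely moves
print(f"p99 latency:     {p99:.2f} ms")      # dominated by the spikes
```

The average stays near 1 ms, but the 99th percentile lands around 10 ms: the devil you know versus the one hiding in the tail.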

In a typical software-defined storage (SDS) setup, we spread data across many storage devices. When we need to access the data, we request bits and pieces from across the system and wait for the responses. The slowest drive to respond delays the entire response. Waiting seems silly in this case because we could likely get the data we need from another drive, but we value data recovery over response time, so we don’t.
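Here is a rough Python simulation of that effect, with invented per-drive numbers: as you fan a request out across more drives, the odds that at least one of them stalls, and the whole read with it, climb quickly:

```python
import random

random.seed(7)

def drive_latency_ms():
    # Each simulated drive is usually fast (~1 ms) but stalls to ~10 ms
    # about 1% of the time. These numbers are invented for illustration.
    return random.gauss(1.0, 0.1) if random.random() > 0.01 else 10.0

def fanout_read_ms(num_drives):
    # A striped read completes only when the SLOWEST drive responds.
    return max(drive_latency_ms() for _ in range(num_drives))

for n in (1, 10, 100):
    samples = [fanout_read_ms(n) for _ in range(10_000)]
    slow = sum(1 for s in samples if s > 5.0) / len(samples)
    print(f"{n:3d} drives: {slow:.1%} of reads wait on a stalled drive")
```

With a 1% stall rate per drive, roughly 1% of single-drive reads are slow, but around 10% of 10-drive reads and well over half of 100-drive reads end up waiting on a straggler.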

If we want to reduce tail latency, and not just throw gravy on it, what can we do? For starters, we can build smarter software: software that can tag an IO with a retry-policy hint, allowing it to favor response time over data recovery. In other words, the hint can say the IO is part of a bigger request: get it fast, and if you can’t get it soon, don’t bother, just fail. Then we can retrieve the data from another drive. We can also aim for storage networks with deterministic latency, using higher bandwidth and less congestion, quality-of-service (QoS) policies, or buffering.
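As a sketch of what acting on that retry-policy hint might look like, here is a hedged read in Python; the `read_from_drive()` helper, the drive IDs, the timings, and the deadline are all hypothetical:

```python
import concurrent.futures
import random
import time

# A minimal sketch of a "hedged" read honoring a retry-policy hint: try the
# primary drive, and if it hasn't answered within a deadline, give up on it
# and fetch the same data from a replica. All names and timings are invented.

pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def read_from_drive(drive_id):
    # Simulated drive: usually ~1 ms, occasionally a ~50 ms stall.
    time.sleep(0.001 if random.random() > 0.05 else 0.050)
    return f"data-from-drive-{drive_id}"

def hedged_read(primary, replica, deadline_s=0.005):
    first = pool.submit(read_from_drive, primary)
    try:
        # Favor response time: wait only up to the deadline.
        return first.result(timeout=deadline_s)
    except concurrent.futures.TimeoutError:
        # Primary is slow; don't wait for it. Read from the replica instead.
        return pool.submit(read_from_drive, replica).result()

print(hedged_read(primary=1, replica=2))
```

If the primary answers in time, nothing extra happens; the replica is only consulted when the deadline passes, trading a little wasted work for a bounded response time.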

You might think these latency spikes are uncommon, and you would be partially right. SNIA estimates that over 2% of IOs may suffer from tail latency. SNIA is working on several initiatives to help reduce these tail latencies, and more information is available on their website.
