Last month I wrote a paper for Red Hat customers called Low Latency Performance Tuning Guide for Red Hat Enterprise Linux 6, or LLPTGFRHEL6 for short 😉
It’s the product of significant research and hands-on experimentation into which configurations provide tangible benefits for latency-sensitive environments. Although the traditional audience for this paper is the financial services industry, I have found that there are all sorts of latency-sensitive workloads out there, from oil and gas to healthcare to the public sector and cloud; everyone wants the best performance out of their shiny new kit.
This paper started out as a formal response to many similar questions I was receiving from the field. Initially a 1-2 page effort, it blew up within a day to 14 pages drawn from my mountain of notes. Talk about boiling the ocean…although I was happy with the content, the formatting left a little to be desired, so I pared it back to about 7 pages and linked out to other in-depth guides where it made sense…
I’m mostly happy with how it turned out…I knew that customers were looking for this type of data (because they asked me over and over), so I set out to conduct numerous experiments, filling out each bullet point with hard data and zero hand-waving. I wanted to explicitly corroborate or dispel certain myths floating around out there about the performance impact of various tuning knobs, so I tested each knob in isolation and reported my recommendations.
I do hope this paper helps guide administrators in their quest to realize ROI from both their hardware and software investments. Please have a look and let me know what you think!
P.S. Are there any other performance domains, workloads, use cases, or environments that you’d like us to look at? Someone mentioned high-bandwidth, high-latency (long-fat-pipe) experiments…would that be of interest?