Battle Plan for RDMA over Converged Ethernet (RoCE)

What is all that %sys time ?  “I never know what she’s _doing_ in there…” Ha!

12:01:35 PM CPU %usr %nice %sys %iowait %irq %soft %idle
12:01:36 PM all 0.08 0.00  3.33 0.00    0.00 5.00  91.59
12:01:36 PM 0   0.00 0.00 40.59 0.00    0.00 59.41  0.00

...

You can instantly find out with ‘perf top’.  In this case (netperf), the kernel is spending time copying skbs around, mediating between kernel and userspace.  I wrote a bit about this in a previous blog post on the traditional protection ring.
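If you’ve never run it, it’s as simple as this (the -p form is just one way to narrow the view; the netserver process name below is an assumption from my netperf setup):

# perf top
# perf top -p $(pgrep netserver | head -1)   <-- watch a single process instead of the whole box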

All that copying takes time…precious, precious time.  And CPU cycles; also precious.  And memory bandwidth…etc.

HPC customers have, for decades, been leveraging Remote Direct Memory Access (RDMA) technology to reduce latency and associated CPU time.  They use InfiniBand fabrics and associated InfiniBand verbs programming to extract every last bit of performance out of their hardware.

As always, those last few percent of performance end up being the most expensive, both in terms of hardware and software and in terms of people-talent and effort.  But they’re also sometimes the most lucrative.

Over the last few years, some inroads have been made in lowering the barrier to entry for RDMA, one of them being RoCE (RDMA over Converged Ethernet).  My employer Red Hat ships RoCE libraries (for Mellanox cards) in the “High Performance Networking” channel.

I’ve recently been working on characterizing RoCE in the context of its usefulness in various benchmarks and customer loads, so to that end I went into the lab and wired up a pair of Mellanox ConnectX-3 VPI cards back-to-back with a 56Gbit IB cable.  The cards are inside Sandy Bridge generation servers.

Provided you have some basic understanding of the hideous vernacular in this area, it turns out to be shockingly easy to set up RoCE.  Here’s some recommended reading to get you started:

First thing, make sure your server is subscribed to the HPN channel on RHN.  Then let’s get all the packages installed.

# yum install libibverbs-rocee libibverbs-rocee-devel libibverbs-rocee-devel-static libibverbs-rocee-utils libmlx4-rocee libmlx4-rocee-static rdma mstflint libibverbs-utils infiniband-diags

The Mellanox VPI cards are multi-mode, in that they support either InfiniBand or Ethernet.  The cards I’ve got came in InfiniBand mode, so I need to switch them over.  Mellanox ships a script called connectx_port_config to change the mode, but we can do it with driver options too.

Get the PCI address of the NIC:

# lspci | grep Mellanox
 21:00.0 Network controller: Mellanox Technologies MT27500 Family [ConnectX-3]

Check what ethernet devices exist currently:

# ls -al /sys/class/net

I see ib0/1 devices now since my cards are in IB mode.  Now let’s change to ethernet mode.  Note you need to substitute your PCI address as it will likely differ from mine (21:00.0).  I need eth twice since this is a dual-port card.

 # echo "0000:21:00.0 eth eth" >> /etc/rdma/mlx4.conf
 # modprobe -r mlx4_ib
 # modprobe -r mlx4_en
 # modprobe -r mlx4_core
 # service rdma restart ; chkconfig rdma on
 # modprobe mlx4_core
 # ls -al /sys/class/net

Now I see eth* devices (you may see pXpY names depending on the BIOS), since the cards are now in eth mode. If you look in dmesg you will see the mlx4 driver automatically sucked in the mlx4_en module accordingly.  Cool!
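A quick sanity check that the expected modules ended up loaded:

# lsmod | grep mlx4
# dmesg | grep -i mlx4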

Let’s verify that there is now an InfiniBand device ready for use:

# ibstat
CA 'mlx4_0'
	CA type: MT4099
	Number of ports: 2
	Firmware version: 2.11.500 <-- flashed the latest fw using mstflint.
	Hardware version: 0
	Node GUID: 0x0002c90300a0e970
	System image GUID: 0x0002c90300a0e973
	Port 1:
		State: Active  <-------------------- Sweet.
		Physical state: LinkUp
		Rate: 40
		Base lid: 0
		LMC: 0
		SM lid: 0
		Capability mask: 0x00010000
		Port GUID: 0x0202c9fffea0e970
		Link layer: Ethernet
	Port 2:
		State: Down
		Physical state: Disabled
		Rate: 10
		Base lid: 0
		LMC: 0
		SM lid: 0
		Capability mask: 0x00010000
		Port GUID: 0x0202c9fffea0e971
		Link layer: Ethernet

Cool, so we’ve got our RoCE device up from a hardware-init standpoint; now give it an IP like any old NIC.
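For a quick back-to-back test, something like this is enough (the eth2 device name and the 172.17.2.x address are just examples from my lab; substitute your own):

# ip addr add 172.17.2.41/24 dev eth2
# ip link set eth2 up

Or drop the equivalent into /etc/sysconfig/network-scripts/ifcfg-eth2 if you want it to survive a reboot.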

Special note for IB users:  most IB switches have a subnet manager built in (RHEL ships one too, called opensm).  But a subnet manager is specific to InfiniBand fabrics and plays no part in Ethernet fabrics, so since RoCE runs over Ethernet there’s no need for opensm here.  The InfiniBand Trade Association article I linked above goes into some detail about what benefits the SM provides on IB fabrics.

Now we get to the hard and confusing part.  Just kidding, we’re done.  Was it that intimidating?  Let’s test it out using an RDMA application that ships with Red Hat MRG Messaging, called qpid-latency-test.  I chose this because it supports RDMA as a transport.

# yum install qpid-cpp-server qpid-cpp-server-rdma qpid-cpp-client qpid-cpp-client-devel -y
# qpidd --auth no -m no
 2013-03-15 11:45:00 [Broker] notice SASL disabled: No Authentication Performed
 2013-03-15 11:45:00 [Network] notice Listening on TCP/TCP6 port 5672
 2013-03-15 11:45:00 [Security] notice ACL: Read file "/etc/qpid/qpidd.acl"
 2013-03-15 11:45:00 [System] notice Rdma: Listening on RDMA port 5672  <-- Sweet.
 2013-03-15 11:45:00 [Broker] notice Broker running

 

Defaults: around 100us.

# numactl -N0 -m0 nice -20 qpid-latency-test -b 172.17.2.41 --size 1024 --rate 10000 --prefetch=2000 --csv
 10000,0.104247,2.09671,0.197184
 10000,0.11297,2.12936,0.198664
 10000,0.099194,2.11989,0.197529
 ^C

With tcp-nodelay: around 95us

# numactl -N0 -m0 nice -20 qpid-latency-test -b 172.17.2.41 --size 1024 --rate 10000 --tcp-nodelay --prefetch=2000 --csv
 10000,0.094664,3.00963,0.163806
 10000,0.093109,2.14069,0.16246
 10000,0.094269,2.18473,0.163521

With RDMA/RoCE/HPN:  around 65us.

# numactl -N0 -m0 nice -20 qpid-latency-test -b 172.17.2.41 --size 1024 --rate 10000 --prefetch=2000 --csv -P rdma
 10000,0.065334,1.88211,0.0858769
 10000,0.06503,1.93329,0.0879431
 10000,0.062449,1.94836,0.0872795
 ^C

Percentage-wise, that’s a really substantial improvement.  Plus don’t forget all the %sys time (which also includes memory subsystem bandwidth usage) you’re saving.  You get all those CPU cycles back to spend on your application!

Disclaimer:  I didn’t do any heroic tuning on these systems.  The above performance test numbers are only to illustrate “proportional improvements”.  Don’t pay much attention to the raw numbers other than order-of-magnitude.  You can do much better starting with this guide.

So!  Maybe kick the tires on RoCE, and get closer to wire speed with lower latencies.  Have fun!

Big-win I/O performance increase coming to KVM guests in RHEL6.4

I finally got the pony I’ve been asking for.

There’s a very interesting (and impactful) performance optimization coming to RHEL6.4.  For years we’ve had to do this sort of tuning manually, but thanks to the power of open source, this magical feature has been implemented and is headed your way in RHEL6.4 (try it in the beta!)


What is this magical feature…is it a double-rainbow ?  Yes.  All the way.

It’s vhost thread affinity via virsh emulatorpin.

If you’re familiar with the vhost_net network infrastructure added to Linux, it moves the network I/O out of the main qemu userspace thread to a kthread called vhost-$PID (where $PID is the PID of the main KVM process for the particular guest).  So if your KVM guest is PID 12345, you would also see a [vhost-12345] process.
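A quick way to see the pairing on a running host (PIDs here are illustrative):

# pgrep qemu-kvm                   <-- PID of the guest process, e.g. 12345
# ps -eo pid,comm | grep vhost     <-- shows the matching [vhost-12345] kthread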

Anyway…with the growing number of CPUs and amount of RAM available, and the proliferation of NUMA systems (basically everything x86 these days), we have to be very careful to respect NUMA topology when tuning for maximum performance.  Lots of common optimizations these days center around NUMA affinity tuning, and the automatic vhost affinity support is tangentially related to that.

If you are concerned with having the best performance for your KVM guest, you may have already used either virsh or virt-manager to bind the VCPUs to physical CPUs or NUMA nodes.  virt-manager makes this very easy by clicking “Generate from host NUMA configuration”:

[screenshot: vcpupin]

OK that’s great.  The guest is going to stick around on those odd-numbered cores.  On my system, the NUMA topology looks like this:

# lscpu|grep NUMA
NUMA node(s): 4
NUMA node0 CPU(s): 0,2,4,6,8,10
NUMA node1 CPU(s): 12,14,16,18,20,22
NUMA node2 CPU(s): 13,15,17,19,21,23
NUMA node3 CPU(s): 1,3,5,7,9,11

So virt-manager will confine the guest’s VCPUs to node 3.  You may think you’re all set now, and you’re close; you can see the rainbow on the horizon.  You have already significantly improved guest performance by respecting physical NUMA topology, but there is more to be done.  Inbound pony.

Earlier I described the concept of the vhost thread, which contains the network processing for its associated KVM guest.  We need to make sure that the vhost thread’s affinity matches the KVM guest affinity that we implemented with virt-manager.

At the moment, this feature is not exposed in virt-manager or virt-install, but it’s still very easy to do.  If your guest is named ‘rhel64’ and you want to bind its “emulator threads” (like vhost-net), all you have to do is:

# virsh emulatorpin rhel64 1,3,5,7,9,11 --live
# virsh emulatorpin rhel64 1,3,5,7,9,11 --config
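The --config form persists this in the guest definition; it shows up as an <emulatorpin> element under <cputune> in the domain XML, which you can confirm with:

# virsh dumpxml rhel64 | grep emulatorpin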

Now the vhost-net threads share a last-level-cache (LLC) with the VCPU threads.  Verify with:

# taskset -pc <PID_OF_KVM_GUEST>
# taskset -pc <PID_OF_VHOST_THREAD>
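If you’d rather not hunt down the PIDs by hand, something along these lines works (the guest name and pgrep patterns are assumptions; adjust for your environment):

# GUEST_PID=$(pgrep -f 'qemu-kvm.*rhel64')
# taskset -pc $GUEST_PID
# taskset -pc $(pgrep "vhost-$GUEST_PID")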

These should match.  Cache memory is many orders of magnitude faster than main memory, and the performance benefits of this NUMA/cache sharing are obvious…using netperf:

Avg TCP_RR (latency)
Before: 12813 trans/s
After: 14326 trans/s
% diff: +10.5%
Avg TCP_STREAM (throughput)
Before: 8856Mbps
After: 9413Mbps
% diff: +5.9%

So that’s a great performance improvement; just remember for now to run the emulatorpin stuff manually. Note that as I mentioned in previous blog posts, I always mis-tune stuff to make sure I did it right. The “before” numbers above are from the mis-tuned case 😉

Off topic…while writing this blog I was reminded of a really funny story I read on Eric Sandeen’s blog about open source ponies. Ha!

Generating arbitrary network packets using the pktgen kernel module

I am staring at a workload that is zillions upon zillions of very tiny packets, and each one is important.  They've got to get there fast.  As fast as possible.  Nagle:  you are not welcome here.  I am seeing some seemingly random jitter, and it's only on this one system.  <confused>

I need to take apart this stack piece by piece, and test each piece in isolation.  Let's start at the lowest level possible.  RHEL6 includes a kernel module called pktgen (modprobe pktgen).  This module allows you to create network packets, specify their attributes and send them at the fastest possible rate with the least overhead.

Using pktgen, I was able to achieve over 3.3M packets per second on a 10Gb Solarflare NIC.  These packets do not incur any TCP/UDP protocol processing overhead.  You can watch the receiver's netstat/IP counters, though.

Since these are synthetic packets, you have to give pktgen some basic information in order for the packets to be constructed with enough info to get where they're going.  Things like destination IP/MAC, the number of packets and their size.  I tested tiny packets, 64 bytes (because that's what this workload needs).  I also tested jumbo frames just to be sure I was doing it right.

This brings up a habit of mine worth mentioning: purposely mis-tuning your environment to validate your settings.  A sound practice!

To get to 3.3Mpps, I only had to make one key change: use a value of 10 for clone_skb.  Anything less than 10 led to fewer packets (a value of zero halved the pps throughput compared to 10), and anything more than 10 had no performance benefit, so I'm sticking with 10 for now.

I wrote a little helper script (actually modified something I found online).  Usage:

./pktgen.sh <NIC_NAME> <CPU_CORE_NUMBER>
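Here's a minimal sketch of what the helper does under the hood, in case you want to roll your own.  The destination IP/MAC are examples from my lab; the /proc/net/pktgen interface itself is standard (see Documentation/networking/pktgen.txt):

#!/bin/bash
# pktgen.sh <NIC> <CPU> -- minimal sketch, not the full script
NIC=${1:?usage: $0 nic cpu}
CPU=${2:?usage: $0 nic cpu}
DST_IP=172.17.1.53                 # receiver's IP (example)
DST_MAC=00:0f:53:0c:58:98          # receiver's MAC (example)

modprobe pktgen

pgset() { echo "$1" > "$2"; }      # write one pktgen command to a /proc file

THREAD=/proc/net/pktgen/kpktgend_$CPU
DEV=/proc/net/pktgen/$NIC

# bind the NIC to the pktgen kthread that runs on our chosen CPU
pgset "rem_device_all" $THREAD
pgset "add_device $NIC" $THREAD

# packet parameters: tiny frames, no inter-packet delay
pgset "count 100000000" $DEV
pgset "clone_skb 10" $DEV
pgset "pkt_size 60" $DEV
pgset "delay 0" $DEV
pgset "dst $DST_IP" $DEV
pgset "dst_mac $DST_MAC" $DEV

# start transmitting; CTRL+C stops the run and dumps the results
echo "Running...CTRL+C to stop"
trap 'echo stop > /proc/net/pktgen/pgctrl; cat $DEV' INT
echo "start" > /proc/net/pktgen/pgctrl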

# ./pktgen.sh p1p1 4
Running pktgen with config:
---------------------------
NIC=p1p1
CPU=4
COUNT=count 100000000
CLONE_SKB=clone_skb 10
PKT_SIZE=pkt_size 60
DELAY=delay 0
MAX_BEFORE_SOFTIRQ=10000

Running...CTRL+C to stop

^C
Params: count 100000000  min_pkt_size: 60  max_pkt_size: 60
     frags: 0  delay: 0  clone_skb: 10  ifname: p1p1
     flows: 0 flowlen: 0
     queue_map_min: 0  queue_map_max: 0
     dst_min: 172.17.1.53  dst_max:
        src_min:   src_max:
     src_mac: 00:0f:53:0c:4b:ac dst_mac: 00:0f:53:0c:58:98
     udp_src_min: 9  udp_src_max: 9  udp_dst_min: 9  udp_dst_max: 9
     src_mac_count: 0  dst_mac_count: 0
     Flags:
Current:
     pkts-sofar: 17662390  errors: 0
     started: 2222764017us  stopped: 2228095026us idle: 40us
     seq_num: 17662391  cur_dst_mac_offset: 0  cur_src_mac_offset: 0
     cur_saddr: 0x330111ac  cur_daddr: 0x350111ac
     cur_udp_dst: 9  cur_udp_src: 9
     cur_queue_map: 0
     flows: 0
Result: OK: 5331009(c5330968+d40) nsec, 17662390 (60byte,0frags)
  3313141pps 1590Mb/sec (1590307680bps) errors: 0

^^ ~3.3 million packets per second.

Without the protocol and higher-layer processing, the 3.3M number has somewhat limited value.  What it's testing is the kernel's TX path, the driver, the NIC firmware, and the physical infrastructure.  This is useful for e.g. regression testing of drivers, validating NIC firmware, or tuning the TX path for whatever particular packet profile your application will drive.

I want to be clear: micro-benchmarks like this have their place.  But take care when designing benchmarks to ultimately include as much of your stack as possible in order to draw usable conclusions.  I stumbled on a quote from Linus Torvalds on this topic that I really liked:

"please don't ever benchmark things that don't make sense, and then use the numbers as any kind of reason to do anything. It's worse than worthless. It actually adds negative value to show "look ma, no hands" for things that nobody does. It makes people think it's a good idea, and optimizes the wrong thing entirely.
Are there actual real loads that get improved? I don't care if it means that the improvement goes from three orders of magnitude to just a couple of percent. The "couple of percent on actual loads" is a lot more important than "many orders of magnitude on a made-up benchmark".

Truth.

The Open-Source Advantage…through the eyes of Red Hat, DreamWorks and OpenStack

Let’s say you’re a big company in a competitive industry.  One who innovates and succeeds by creating software.  Not extending COTS, not adapting existing code.  Generating fresh, new code, at your full expense.  The value the company receives by investing in the creation of that software is competitive advantage, sometimes known as the profit-motive.

You’re an executive at this company.  Creating the software was your idea.  You are responsible for the ROI calculations that got the whole thing off the ground.

Your career may rest on the advantage the invention provides, and you are boldly considering opening up the code you’ve created.  But first you need to understand why so many of your peers (at company after company) have staked their careers behind open-sourcing what was once their company’s secret sauce.

Let’s look at some examples.  Since I work for Red Hat, I’ll mention a few of our own first.  But we’re a different animal, so I’ve also looked into other verticals to find powerful examples.

Over the last 5 years, Red Hat has open sourced software we’ve both built and acquired, such as RHEV-M (oVirt), CloudForms (Katello) and OpenShift (Origin).  Just as other companies derive value from our software, we also derive some value from individuals/companies extending software, generating powerful ecosystems of contributors both public and private.  For example, http://studiogrizzly.com/ is extending the open-source upstream for Red Hat’s Platform-as-a-Service called OpenShift Origin.

Both Rackspace and NASA also identified open-source as a way to build better software faster.  Their OpenStack project is a living, breathing example of how a project can be incubated within a closed ecosystem (closed in governance rather than code), grow beyond anyone’s imagination to scratch an itch that no one else can, and blossom into an incredible example of community driven innovation.  As a proof-point, this year the governance of OpenStack transitioned to a Foundation model.  I should mention that Red Hat maintains a seat on that Foundation board, contributes significant resources/code to the upstream project and productized an OpenStack enterprise distribution over the summer.

More recently, DreamWorks has decided to open-source an in-house tool they’ve developed called OpenVDB.

It’s well known that DreamWorks relies heavily on open-source software (Linux in particular).  But to have them contribute directly, using The Open Source Way, truly deserves a closer look.  What could have led them to possibly eliminate any competitive advantage derived from OpenVDB?

While I have no specific knowledge of DreamWorks’ particular situation, here are some ideas:

  • The industry moved on, and they’ve extracted most of the value already.
  • They are moving on to greener pastures, driving profit through new tools or techniques.
  • Maybe OpenVDB has taken on a life of its own, and although critical to business processes, it would benefit from additional developers.  But they’d rather pay artists and authors.

If I had to guess, it would be closest to:

  • The costs/maintenance burden for OpenVDB exceeds the value derived.  Set it free.

Now it’s the competitor’s move.  Will they simply study OpenVDB and take whatever pieces they were missing or did poorly?  Will they jump into a standardization effort?

The answer may lie in the competitor’s view on the first bullet above, and whether they have a source of differentiation (aka revenue) outside of the purpose of OpenVDB.  If they do, standardizing tools will benefit both parties by eliminating duplicate effort/code, or possibly reducing/eliminating the maintenance burden of internal tools.  And that will generate better software faster.

This speaks to a personal pet peeve of mine: duplicated effort.  Again and again I see extremely similar open-source tools popping up.  For argument’s sake, let’s say this means that the ecosystem of developers spent (# of projects * # man-hours) creating the software.  If they’d collaborated, it may have resulted in a single more powerful tool with added feature velocity, and potentially multiplied any corporate-backed funding.  That’s a powerful reason to attempt to build consensus before software.  But I digress…

 

 

processor.max_cstate, intel_idle.max_cstate and /dev/cpu_dma_latency

Customers often want the lowest possible latency for their application, whether it’s VOIP, weather modeling, financial trading, etc.  For a while now, CPUs have had the ability to transition between frequencies (P-states) based on load.  This is handled by the CPU frequency governor, and the user interface is the cpuspeed service.

For slightly less time, CPUs have been able to switch certain sections of themselves on or off, and scale voltages up or down, to save power.  This capability is known as C-states.  The downside to the power savings that C-states provide is a decrease in performance, as well as non-deterministic performance outside of the application’s or operating system’s control.

Anyway, for years I have been seeing things like processor.max_cstate on the kernel cmdline.  This got the customer much better performance at higher power draw, a business decision they were fine with.  As time went on, people began looking at their datacenters and thinking about how expensive power was getting, and decided they’d like to find a way to consolidate.  That’s code for virtualization.  But what about workloads so far unsuitable for virtualization, like those I’ve mentioned…those that should continue to run on bare metal, and further, maybe even on specialized hardware?

A clear need for flexibility:  sysadmins know that changes to the kernel cmdline require reboots.  But the desire to enable the paradigm of absolute-performance-only-when-I-need-it demands that this be a run-time tunable.  Enter the /dev/cpu_dma_latency “pmqos” interface.  This interface lets you specify a target latency for the CPU, meaning you can use it to indirectly control, with precision, the C-state residency of your processors.  For now, it’s an all-or-nothing affair, but stay tuned as there is work to increase the granularity of C-state residency control to per-core.

Back in 2011, Red Hatter Jan Vcelak wrote a handy script called pmqos-static.py that enables this paradigm.  No more dependence on the kernel cmdline.  Toggle C-states on demand, to your application’s desire.  Control C-states from your application startup/shutdown scripts.  Use cron to dial up the performance before business hours and dial it down after hours.  Significant power savings can come from this simple script, when compared to using the cmdline.
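For the cron idea, a couple of crontab entries are all it takes.  The times are made up; the latency values are the ones I demonstrate further down:

0 8 * * 1-5    /usr/libexec/tuned/pmqos-static.py cpu_dma_latency=0      <-- business hours: keep cores in C0
0 18 * * 1-5   /usr/libexec/tuned/pmqos-static.py cpu_dma_latency=200    <-- after hours: allow deep C-states again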

A few notes, before the technical detail.

1) When you set processor.max_cstate=0, the kernel actually silently sets it to 1.

drivers/acpi/processor_idle.c:1086:
 1086         if (max_cstate == 0)
 1087                 max_cstate = 1;

2) RHEL6 has had this interface forever, but only recently do we have the pmqos-static.py script.

3) This script provides “equivalent performance” to kernel cmdline options, with added flexibility.

So here’s what I mean…turbostat output on RHEL6.3 on a Westmere X5650 (note: same behavior on SNB E5-2690):

Test #1: processor.max_cstate=0
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 2.37 3.06 2.67 0.03 0.13 97.47 0.00 67.31
Test #2: processor.max_cstate=1
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.04 2.16 2.67 0.04 0.55 99.37 4.76 88.00
Test #3: processor.max_cstate=0 intel_idle.max_cstate=0
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.02 2.20 2.67 99.98 0.00 0.00 0.00 0.00
Test #4: processor.max_cstate=1 intel_idle.max_cstate=0
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.02 2.29 2.67 99.98 0.00 0.00 0.00 0.00
Test #5: intel_idle.max_cstate=0
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.02 2.19 2.67 99.98 0.00 0.00 0.00 0.00
# rpm -q tuned
tuned-0.2.19-9.el6.noarch
Test #6: now with /dev/cpu_dma_latency set to 0 (via latency-performance  profile) and intel_idle.max_cstate=0.
The cmdline overrides /dev/cpu_dma_latency.
# tuned-adm profile latency-performance
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.01 2.32 2.67 99.99 0.00 0.00 0.00 0.00
Test #7: no cmdline options + /dev/cpu_dma_latency via
latency-performance profile.
# tuned-adm profile latency-performance
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 100.00 2.93 2.67 0.00 0.00 0.00 0.00 0.00

There is additional flexibility, too…let me illustrate:

# find /sys/devices/system/cpu/cpu0/cpuidle -name latency -o -name name | xargs cat
C0
0
NHM-C1
3
NHM-C3
20
NHM-C6
200

This shows you the exit latency (in microseconds) for various C-states on this particular Westmere (aka Nehalem/NHM).  Each time the CPU transitions between C-states, you take a latency hit of almost exactly that many microseconds (which I can see in benchmarks).  By default, an idle Westmere core sits in C6 (SNB sits in C7).  To get that core up to C0, it takes 200us.
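If you don’t have turbostat handy, the cpuidle counters in sysfs give a rough view of where a core has been sleeping (cpu0 shown here; ‘usage’ is the number of times each state was entered):

# cd /sys/devices/system/cpu/cpu0/cpuidle
# for s in state*; do echo "$(cat $s/name): exit latency $(cat $s/latency)us, entered $(cat $s/usage) times"; done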

Here’s what I meant about flexibility.  You can control exactly what C-state you want your CPUs in via /dev/cpu_dma_latency and the pmqos-static.py script, all dynamically at runtime.  cmdline options do not allow for this level of control; as I showed, they override /dev/cpu_dma_latency.  Exhaustive detail about what to expect from each C-state can be found in Intel’s Architecture documentation.  Around page 35-52 or so…

Using the information I fished out of /sys above…set it to 200 and you’re in the deepest C-state:

# /usr/libexec/tuned/pmqos-static.py cpu_dma_latency=200
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.04 2.19 2.67 0.04 0.26 99.66 0.91 91.91

Set it to anything in between 20 and 199, and you get into C3:

# /usr/libexec/tuned/pmqos-static.py cpu_dma_latency=199
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.03 2.28 2.67 0.03 99.94 0.00 89.65 0.00

Set it to anything in between 1 and 19, and you get into C1:

# /usr/libexec/tuned/pmqos-static.py cpu_dma_latency=19
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 0.02 2.18 2.67 99.98 0.00 0.00 0.00 0.00

Set it to 0 and you get into C0. This is what latency-performance
profile does.

# /usr/libexec/tuned/pmqos-static.py cpu_dma_latency=0
pk cr CPU %c0 GHz TSC %c1 %c3 %c6 %pc3 %pc6
 100.00 2.93 2.67 0.00 0.00 0.00 0.00 0.00

Sandy Bridge chips also have C7, but the same rules apply.

You have to decide whether this flexibility buys you anything in order to justify rolling any changes across your environment.

Maybe just understanding how and why this works might be enough! 🙂

Moving away from useless kernel cmdline options is one less thing for you to maintain, although I realize you still have to enable the tuned profile.  So, dial up the performance when you need it, and save power/money when you don’t!  Pretty cool!

How to deal with latency introduced by disk I/O activity…

The techniques used by Linux to get dirty pages onto persistent media have changed over the years.  Most recently the change was from a gang of threads called pdflush to a per-backing-device (bdi) thread model, basically one thread per LUN/mount.  While neither is perfect, the fact of the matter is that you shouldn’t really have to care about disk interference with your latency-sensitive app.  Right now, we can’t cleanly apply the normal affinity tuning to the flush-* bdi kthreads and thus cannot effectively shield the latency-sensitive app entirely.

I’m going to stop short of handing out specific tuning advice, because I have no idea what your I/O pattern looks like, and that matters.  A lot.  Suffice it to say that (just like your latency-sensitive application) you’d prefer more frequent, smaller transfers/writes over less frequent, larger transfers (which are optimized for throughput).

Going a step further, you often hear about using tools like numactl or taskset to affinitize your application to certain cores, and chrt/nice to control the task’s scheduler priority and policy.  These flusher threads are not easy to deal with.  We can’t apply the normal tuning using any of the above tools, because the flush-* threads are kernel threads, created using the kthread infrastructure.  bdi flush threads take a fundamentally different approach than other kthreads, which are instantiated at boot (like migration) or at module insertion time (like nfs).  There’s no way to set a “default affinity mask” on kthreads, and kthreads are not subject to isolcpus.

Even up to the current upstream kernel version, the flush-* threads are started on demand (like when you mount a new filesystem), and then they go away after some idle time.  When they come back, they have a new pid.  That behavior doesn’t mesh well with affinity tuning.

By contrast, nfsd kthreads do not come and go after they are first instantiated, so you can apply typical affinity tuning to them and get measurable performance gains.

For now:

  • Write as little as possible.  And/or write to (shared)memory, then flush it later.
    • Take good care of your needed data, though!  Memory contents go bye-bye in a “power event”.
  • Get faster storage like a PCI-based RAM/SSD disk
  • Reduce the amount of dirty pages kept in cache.
  • Increase the frequency at which dirty pages are flushed, so that there is less written each time (see the vm.dirty_* example below).
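The last two bullets map onto the vm.dirty_* sysctls.  The values below are illustrative only; the right numbers depend entirely on your I/O pattern and on how much data you can afford to lose in that “power event”:

# sysctl -w vm.dirty_background_ratio=1       <-- start background writeback much sooner (default is 10)
# sysctl -w vm.dirty_ratio=10                 <-- throttle writers well before the default 20% of RAM is dirty
# sysctl -w vm.dirty_expire_centisecs=500     <-- pages count as "old" after 5s instead of 30s
# sysctl -w vm.dirty_writeback_centisecs=100  <-- wake the flusher threads every 1s instead of 5s
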
Further reading:
http://lwn.net/Articles/352144/ (subscription required)
http://lwn.net/Articles/324833/ (subscription required)

Tracking userspace memory allocation with glibc-utils memusage

Will Cohen turned me on to a little helper tool called memusage, which is distributed with glibc.  The purpose of that tool is to trace memory allocation behavior of a process.

In RHEL, the memusage binary is part of the glibc-utils package.  There’s actually also a shared library called /usr/lib64/libmemusage.so that’s part of the base glibc package, which can be used via LD_PRELOAD.
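Basic usage is a one-liner; the program name below is a stand-in, and the --png/--data options are what I use for the graphs (check memusage --help on your build):

# memusage ./myapp                                     <-- summary table on the terminal
# memusage --png=myapp.png --data=myapp.dat ./myapp    <-- also graph allocations over time
# LD_PRELOAD=/usr/lib64/libmemusage.so ./myapp         <-- preload-only variant from base glibc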

memusage writes output to your terminal, as below:

It is also capable of writing memory allocation over time to a png file, for example:

Netperf is not a particularly memory-intensive benchmark for illustrating its usage; I just wanted to describe the utility.  I’ll upload more interesting graphs when I run more loads with the library.

Thoughts on Open vSwitch, kernel bypass, and 400gbps Ethernet…

For the Red Hat Summit this year, I wrote a paper on the kernel-bypass technology from Solarflare, called OpenOnload.  From a performance standpoint it’s hard to argue with the results.

I was looking at code from Open vSwitch recently, and it dawned on me that there is an important similarity between Open vSwitch and OpenOnload; a similar 2-phase approach…let me explain.

Both have a “connection setup” operation where many of the well-known user-space utilities come into play (and some purpose-built ones like ovs-vsctl)…things like adjusting routing, MTU, interface statistics etc…And then what you could call an accelerated path, which is used after the initial connection setup for passing bits to/from user-space, whether that be a KVM process or your matching engine.

In OpenOnload’s case, the accelerated path bypasses the Linux kernel, avoiding kernel-space/user-space data copies and context switches and thus lowering latency.  This kernel-bypass technique (RDMA being the best-known flavor) has been around for decades, and there are quite a few vendors out there with analogous solutions.  Often there are optimized drivers, things like OFED and a whole bunch of other tricks, but that’s beside my point…

The price paid for achieving this lower latency is having to completely give up, or entirely re-implement, lots of kernel goodies like what you’d expect out of netstat, ethtool and tcpdump.

In the case of Open vSwitch, there is a software “controller” (which decides what to do with a packet) and a data-path implemented in a kernel module that provides the best performance possible once the user-defined policy has been applied via the controller.  If you’re interested in Open vSwitch internals, here’s a nice presentation from Simon Horms.  I think the video is definitely worth a half hour!
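To make the two paths a bit more concrete: the setup path is what you touch with the userspace tools, and the datapath is what the kernel module maintains afterwards.  A hedged example (bridge and port names are made up):

# ovs-vsctl add-br br0             <-- setup path: talks to ovsdb/ovs-vswitchd
# ovs-vsctl add-port br0 eth2
# ovs-dpctl show                   <-- datapath: inspect the kernel module's ports and flow stats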

Anyway, what do accelerated paths and kernel bypass boil down to?  Pressure on the network stack.  Things like swap-over-NFS, NFS-root, the proliferation of iSCSI/NFS filers and FUSE-based projects like Gluster put network subsystem performance directly in the cross-hairs.  Most importantly, demands on the networking subsystem of all operating systems are pushing the performance boundaries of what the traditional protection ring concept can provide.

Developers go to great lengths to take advantage of the ring model; however, it seems faster network throughput (btw, is 400Gbps Ethernet the next step?) and lower latency requirements are more at odds than ever with the ring paradigm.

Linux and BSD’s decades-old niche of being excellent routing platforms will be tested (as it always is) by these future technologies and customer demand for them.  Looking forward to seeing how projects like OpenStack wire all of this stuff together!

Low Latency Performance Tuning Guide for Red Hat Enterprise Linux 6

Last month I wrote a paper for Red Hat customers called Low Latency Performance Tuning Guide for Red Hat Enterprise Linux 6 or LLPTGFRHEL6 for short 😉

It’s the product of significant research and my hands-on experiments into what configurations provide tangible benefit for latency-sensitive environments.  Although the traditional audience for this paper is the financial services industry, I have found that there are all sorts of latency-sensitive workloads out there.  From oil and gas to healthcare to the public sector and cloud, everyone wants the best performance out of their shiny new kit.

This paper started out as a formal response to many similar questions I was receiving from the field.  Initially a 1-2 page effort, within a day it had blown up to 14 pages of stuff from my mountain of notes.  Talk about boiling the ocean…although I was happy with the content, the formatting left a little to be desired so I pared it back to about 7 pages and linked out to other in-depth guides where it made sense…

I’m mostly happy with how it turned out…I know that customers were looking for this type of data (because they asked me over and over) and so I set out to conduct numerous experiments filling out each bullet point with hard data and zero hand-waving.  I wanted to explicitly corroborate or dispel certain myths that are floating around out there about performance impact of various knobs, so I tested each in isolation and reported my recommendations.

I do hope that this paper helps to guide administrators in their quest to realize ROI from both their hardware and software investments, please have a look and let me know what you think!

P.S.  are there any other performance domains, workloads, use-cases or environments that you’d like us to look at?  Someone mentioned high-bandwidth-high-latency (long-fat-pipe) experiments…would that be of interest?

It’s not how good you are, it’s how fast you get better…

My current gig at Red Hat puts me in an interesting position within the FOSS world.  I work on what’s called the Performance Team, part of the CTOs office.  Most of our efforts are what you’d expect a CTO Office to be doing…playing with the new stuff.

We’re often the initial evaluators (outside of development) of a feature or other piece of code.  My most significant (and completely hidden) contribution to FOSS is early adopter type feedback…hopefully while the code is still malleable, and the planets align in terms of product cycle.

Which brings me to my point…It’s not how good you are, it’s how fast you get better.  Here, “you” is FOSS.  Myriad articles have been written, code studies conducted etc, to prove the merit of the FOSS development model which is typically quantified in terms of “feature velocity” or bugs-per-LOC.  The feature velocity fire-hose is where I’m standing.

The Red Hat model of upstream-first, means that raw RFC-type code submissions are common, and ideas are fleshed out in the open.  I (and many other non-developers) observe this process and provide feedback or guidance.  It’s this iterative, public development process that is precisely (but not exclusively) the value that customers get from choosing an open source platform for their business.  It’s also the reason why I love my job.

Development peers do care about the quality @ initial submission.  I also care about initial code quality, though mostly in the functional sense…does it boot.  Further along in my personal workflow, I begin to get very concerned with the mechanics of the code itself.

  • How it’s designed.
  • Where are the long poles, and can they be mitigated?
  • What makes sense for defaults?
  • Do we need tracepoints — how do I observe the critical sections under load?
  • Finally, a personal goal…How do we avoid surprising the sysadmins, who are again further downstream, and the very lifeblood of an infrastructure provider like Red Hat.  The guys on the ground wired to a monitoring system, who we absolutely want as our “well-slept” allies 🙂

Cut from the same cloth, I take particular care in approaching my work from that perspective because I always hoped there was some developer out there who had my back.  As it turns out, there are countless people at Red Hat who sincerely care about the impact that our software has on our customers’ bottom line.  And countless more whose job is “simply” to enable the first group.

In the recent linux.com series profiling linux kernel developers, a very common answer to the question on how to get involved, is to scratch your own itch.  A valid approach, and like any other community, the FOSS community naturally needs to be concerned with care and feeding of existing developers as well as inspiring new ones.  With that in mind, I tend to think of a hidden value we provide our customers; staffing developers to write code that scratches other itches, solves someone else’s problem, or otherwise enables the success of business.

Our industry has a “Linux Plumbers Conference”, possibly one of the most accurate analogies identifying the market we serve:  open source code enables differentiation-by-extension.  We’re the home improvement store for your neighborhood and our shelves are always fresh.