Going Full Send on my WFH A/V setup

I tried hard to make it work. But I could never get the results I wanted. The crisp video, clean audio, the BOKEH. Herein lie my adventures (full send, some might say) over the past ~2 months in getting a proper home office A/V setup. It’s not perfect yet, but I’m fairly happy with the improvement. I’ll close this write-up with some of my next steps / gaps.

Starting with some basic tenets:

  • Whatever I end up with has to augment all of my ingrained behaviors; I’m not going to relearn how to use a computer (change operating system) for the sake of this.
  • I’m stuck with the room I’ve got. The walls have odd angles which makes lighting difficult.
  • I don’t know anything about this, so there will be lots of trial, error, and Googling.

I started by scouring YouTube for videos about the ultimate camera setup for WFH. It turns out the center of knowledge in this area is Twitch streamers. There are also a few more business-focused companies posting setups; it feels like they’re using these videos to promote their consulting businesses (which are around video editing, or promoting small businesses). Twitch streamers also know a lot about Open Broadcaster Software, but I’ll get to that in a bit.


After seeing this blog come across my Twitter, I impulse-bought a GoPro Hero 8 Black. Turns out GoPro is completely ignoring Linux, and their webcam firmware is really, really beta even on Mac and Windows. Returned it.

After watching a lot more of these videos, I started seeing a trend towards a particular mirrorless camera, the Canon EOS M50. I’m trying to stay with models that folks recommend for WFH/streaming, and that I can find SOMETHING saying they support Linux / have anecdotal evidence of it working. So I bought a Canon M50. I had to buy it from a place I hadn’t shopped at before (Best Buy), because a lot of people are trying to up their WFH A/V game, making components scarce.

So I’ve got the camera. I also need a micro HDMI -> USB 3 cable, so I got one of those. Elsewhere in the YouTube black hole, I came across the term “dummy battery”. This is a hollow battery that lets you plug the camera directly into wall power to avoid having to use real batteries and their runtime limitations. Canon dummy batteries were sold out everywhere, including Canon themselves, although I did place an order with them directly (their web purchasing experience is stuck in the early 2000s). It was on backorder, so I eventually canceled that order and bought a knockoff dummy battery for 20% of the price of the real one. Was I worried that something 20% cheaper would instantly fry my camera? Yep. But I am in full send mode now.

So I have the camera, the HDMI cable, the dummy battery. I probably need a tripod, right? I think cameras need tripods. OK, I got a tripod. Not sure what I’ll use it for, but I can always return it. Turns out the tripod was a key investment in making sure the camera is stable, level and oriented properly for the right angle.

Next I probably need a memory card, right? Cameras need memory cards? OK, I’ll get a memory card. But which one? I’m planning to put down 4K video @ 60 fps, so “sorting by least expensive” is probably not the right move. Turns out there is a whole disaster of categorization of memory card performance. I ended up reading, and re-reading, this page a few times. What a nightmare. The least consumer-friendly thing since 802.11 spec naming. Anyway, I ended up buying a 128GB class 10 card and it seems to be fine.

I then have to connect the camera to my computer. The videos suggest HDMI, but Canon has recently announced USB support for live-streaming. This blog in particular was referenced in a few videos. Let’s try that. OK, now I am into the land of having to configure and load out-of-tree kernel modules. How do you do this again? It’s been a while. OK, got it. Whew. How badly do I want the BOKEH? This is getting dicey and doesn’t feel right.

Well, it actually works. But it is FAR from ideal. The fact that I’ve got to run gphoto2 on boot, and install all this goop to wire things together … there has to be a better way?
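For the record, the “goop” amounted to something like this. A sketch only: it assumes the v4l2loopback module is built and installed, and that it creates /dev/video2 on your machine (the device number and card label are assumptions; adjust for your setup).

```shell
# Load the loopback module so the camera can appear as a normal webcam.
# exclusive_caps=1 makes browsers (Meet, etc.) recognize the device.
sudo modprobe v4l2loopback exclusive_caps=1 card_label="Canon M50"

# Pipe the camera's live view into the loopback device via ffmpeg.
gphoto2 --stdout --capture-movie |
  ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video2
```

Both commands have to stay running for the “webcam” to exist, which is exactly why this wants to be a boot-time service rather than something you type every morning.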

I began using this setup for the first time in real video calls (we use Google Meet where I work). The quality was infinitely improved (after making sure my Meet config used 720p, and after literally days of experimentation with camera settings). People immediately noticed the difference too. However, sometimes the camera would shut off? I noticed a trend of it shutting off after about a half hour? But I have the dummy battery. What’s going on?

I will spare you the suspense. If I’d found this page, I’d never have bought the Canon M50. It shuts off after a half hour on purpose. There’s no way around it. Also the Canon M50 is not compatible with Magic Lantern firmware. What is Magic Lantern? Something I don’t want to deal with. Gone are my days of CyanogenMod on my Palm Pre (by the way, the Palm Pre was better than any phone I’ve owned before or since, don’t @ me).

So, my Canon shuts off. That’s just about the worst flaw it could have. Back to Best Buy. But the 14-day return window had expired, so I couldn’t do it online; I now had to physically visit a Best Buy to haggle, or eBay it. Despite the COVID situation, I decided to mask up and haggle. Luckily they took the camera back, yay. If you’re interested in why it turns off, I found this post which is probably right…yeesh!

Next, what camera should I go for? Elgato’s page made it super easy to find cameras of a similar price-point that didn’t have any glaring flaws.

After much more fact-checking, I decided to get a Sony A6100. This time Amazon had it in stock, and significantly cheaper than other sites (price gouging?). The Sony arrived and was immediately more useful (because it doesn’t shut off). Incidentally, I had to buy a different dummy battery for the Sony: an off-brand one that has thus far not fried my camera. The tripod, memory card and HDMI cable were all compatible.

Next, how to connect this to my computer. The Sony also supports USB, but I’m not happy with the quality. It also uses a lot of CPU. What solutions are there? After many a sleepless night on YouTube learning what a “capture card” is, I went looking for the highest-end model, and found the Elgato 4K60 PRO PCI capture card. Wow, this was a total waste of time. Not only does it have zero Linux support, but it feels like Elgato is actively ignoring Linux across the board, which is possibly corroborated by my experience with their CamLink 4K as well(?). I ended up searching my email for “capture card” in case some other intrepid Red Hatter was further along than me.

It turns out that an engineer whom I hired ~4 years ago, and whom I allegedly sit next to in the office (it’s been 6+ months, and I’m full sending my home office, so maybe I’ll never see him again), wrote a post about capture cards in May 2020! I have learned to trust this person’s opinion on these sorts of niche devices, because he has described his enviable Plex setup (it has a dedicated air conditioner), his GPU bitcoin mining, his Threadripper prowess, and other things to me over lunch that indicate he has a knack for understanding things at the level of detail I need for my BOKEH. His post pointed to a Magewell USB Capture HDMI 4K Plus device. In particular, he had tested it on Fedora and noted how hands-off the setup was. His one complaint was noise from a built-in fan. After balking at the price, I decided that it could be returned, so I got one. It turns out he was right, and it works great! One thing, though: I haven’t heard the fan at all, which I guess is good. Thanks to Sebastian for this tip.

However, it’s really expensive. And the YouTubers are telling me about the Elgato CamLink 4K, which is 1/3rd the price. I got a CamLink 4K and decided that if it worked, I’d return the Magewell to save the money. I hooked up the CamLink; the kernel sees it as a uvc device, but I see nothing but a black screen in OBS and Google Meet. After an hour’s worth of effort, and Google turning up several complaints on this topic (admittedly not a huge effort on my part), I decided to trade money for my weekends/evenings back, and stick with the Magewell. Sebastian was right again. Hat-tip.


On to Audio. Last year I hired an engineer who turned out to be in two metal bands. On video calls with him, he had what looked like a very serious A/V setup in his home office. If you’re into Dual-Violin US Folk Metal, maybe check them out. Anyway, this person gave me some guidance on audio, and I ended up going with a Blue Yeti USB mic. This is the one aspect of this journey that worked on the first try. However, I could not help but think: maybe there’s a way to improve? At conferences, back when they were in-person, presenters got a lavalier mic. I bought one and it wasn’t any better. Also, it was annoying to have a cable dangling from my collar all day. Returned.

For years I’ve been using Bose active noise cancelling headphones (at the office which is an open-office disaster / cacophony). At home I also bought Bose ones, but the in-ear model. The only thing I don’t like is that they’re wired, so I’m tied to the desk. One thing I do like is that they’re wired, so I never have to worry about batteries like I do with the ones I have at the office. I also have a pair of noise cancelling Galaxy Buds (which I love). I decide to try those. Ahh, my workstation doesn’t have Bluetooth. Back to Amazon to get a cheap bluetooth dongle. And now the Galaxy Buds work. But the experience sucks, for a few reasons:

  • I have to worry about batteries
  • They disconnect if I walk down the hall
  • Pairing is less than ideal in Linux where I have ~5 audio devices
  • I notice a lot of CPU usage tied back to the Bluetooth device…not good.

I decided not to die on this hill, and stuck with the Bose QuietComfort 20.

I have the Video and Audio basically squared away at this point. What’s next? Lighting.


The majority of how-to videos indicate that if you don’t have proper lighting, your camera or software won’t matter. I began to research lighting, and found that Elgato Key Lights are a popular thing. You apparently need two of them, they’re $200 each, and they’re out of stock. So, nope. I have a spare floor lamp and decided to use that. This is much better than the ceiling-fan light I had, which was casting a very scary shadow on my face 🙂 So the lamp is to my east-south-east, pointed towards the ceiling, and I’m OK with the results. This is an area I may eventually want to improve, but maybe I’m nearing diminishing returns time-wise?


Conferences have gone virtual, and I have several presentations lined up, which are all typically recorded. So now I need to figure out how to record myself. According to YouTubers, I need to figure out what OBS stands for. Open Broadcaster Software is (apparently) an open source streaming/recording application available for Windows, Mac and Linux. I am now watching EposVox’s masterclass on OBS. It’s complicated, but not terrible to do basic stuff. You can see my first stab at integrating my efforts into a recorded presentation here.

After watching that video back, I have a few areas to improve:

  • I keep having to pause the recording to load the next section of content into my brain, and I have to keep looking down to pause. There are apparently things called Elgato Stream Decks: a pad of hotkeys used by game streamers to automate certain operating system or OBS operations. OBS also supports hotkeys. Here is what my OBS canvas looks like, including a few video sources and scenes:
  • I am not looking into the camera often enough. Yes, I have to look at my monitor to do demos and whatnot (expected), but I am also looking at my monitor to check my notes. I want to avoid that somehow. It turns out that phone-based teleprompters are a thing, and inexpensive. I bought one. It’s freaking cool. It mounts directly to the camera lens and displays the “script” overlaying the lens itself. So you are staring directly at the lens, and have a “Star Wars intro”-style scrolling text of your content. Cannot recommend this product enough for professional delivery of recorded content. It even comes with a Bluetooth remote to control scrolling speed and start/stop. That dongle comes in handy again!
  • I want to involve whiteboards in my presentations. In the office, I have whiteboards available to me everywhere. I need one at home. But due to the size and shape of my office, I really don’t have a wall within camera-range to mount one. So I went with one on wheels. I haven’t used it in any presentations yet, but I’ve been using it for keeping notes and so far loving it.
  • I have to learn how to do basic video editing. After some Googling for the state of the art on Linux, I found Kdenlive which isn’t terrible to learn, after watching a few beginners videos.
  • I realize the audio is out of sync with the video. OBS let me insert a delay to help match them up. 300ms seems to be perfect.
  • In the original version of this video, the audio was super low. So I had to learn how to convert an mkv (OBS default) to an mp4, so I could work with the audio independently of the video and boost the audio gain (by +20dB if you’re curious). Thanks to some quick tips from Langdon White, I was able to achieve this. At this point my various experiments and YouTube deep dives are starting to pay off. I am smiling, finally 🙂
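That mkv-to-mp4 conversion plus gain boost can be done in a single ffmpeg pass. A sketch (the filenames are placeholders, and your gain value may differ):

```shell
# Remux OBS's default mkv into mp4 and boost the audio by +20dB.
# -c:v copy leaves the video stream untouched, so only the audio
# gets re-encoded (here to AAC, a safe codec for mp4).
ffmpeg -i talk.mkv -c:v copy -af "volume=20dB" -c:a aac talk.mp4
```

Because the video is stream-copied rather than re-encoded, this runs in roughly the time it takes to read the file, with no generational quality loss on the video side.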


Next Steps

  • For some reason, when I turn off the camera, the zoom level resets to 16mm. But I want it at 18mm. So every time I turn the camera power on, I have to dial the zoom back in manually. Not a huge deal since it’s just once a day.
  • CPU usage in Chrome…brings the computer to a crawl. My workstation has 16 cores and 64G RAM. Sigh…so now all my Google Meets occur in Firefox. Not too bad, just annoying when it comes to screensharing since I really do not want to use Firefox as my primary browser.
  • Lens: according to photography snobs, if I don’t get a better lens, they’ll throw me out of the subreddit. This will probably have to wait until after my winning lotto ticket shows up.
  • After talking with some coworkers who are also upleveling their WFH A/V setups (and thus learning what OBS stands for), I came to find out that OBS has some noise-filtering options built in. I could have used those to filter out some background noise (e.g. from my kids or my workstation fans).

Conclusion / Final Hardware list

So, in the end, my hardware and software setup as of this posting is:

I have to say, this has been a really fun project. It’s an area I had zero knowledge of going in – just a personal goal to improve my WFH A/V. It’s also an area of somewhat daunting complexity, hardware opinions (nerd fights), and an endless compatibility matrix. That’s part of why I went the route of buying stuff and returning it [1].

I hope this post helps someone who is looking to improve their home office video quality avoid newb mistakes and just get it done. Also, I do realize that there are likely cheaper options across the board. But at least you have the laundry list of stuff that worked for me, within my given constraints, and can possibly phase your purchases like I did over a couple months.

[1] always check the return policy 🙂

List of Useful YouTube Channels


Tweaking my webcam setup (Logitech C930e, Fedora Linux, v4l)

I came really close to making some large purchases to improve my video call situation, and I may still do so, but I did spend some time and found a few quick wins that may help others:

  1. I use Fedora as my workstation. Won’t change it.
  2. I have a Logitech C930e. Logitech doesn’t publish anything on how to tweak the camera on Linux. Figure out how to tweak it.
  3. I like (love) working in the dark. So I never have any lights on. That has to change.
  4. I have a window behind me in my home office. Shutting the blinds is not enough. Repositioning my desk won’t work in the space I’ve got. Get a blackout curtain.
  5. My webcam is sitting on top of my monitor, dead center. This makes it really awkward to look directly at. It’s about 8″ above my eye-line. I don’t think I’m going to change this. My eyeline has to remain at the center of the monitor or I get neck pain.

Here are the tweaks I made that do seem to have improved things:

  1. dnf install v4l2ucp v4l-utils
  2. v4l2-ctl --set-ctrl zoom_absolute=125 # this helps with the "know your frame" part https://www.youtube.com/watch?v=g2wH36xzs_M.  This camera has a really wide FoV, so this shrinks it down a bit.
  3. v4l2-ctl --set-ctrl tilt_absolute=-36000 # this helps tilt the center of the camera frame down towards where I'm sitting (8" below camera).
  4. v4l2-ctl --set-ctrl sharpness=150 # This seemed to help the most.  I tried a bunch of values and 150 is best for my office.
  5. Lighting: Instead of having my desk lamp illuminate my keyboard, turn it 180 degrees to bounce off the white ceiling. Big improvement.
  6. Lighting: You can’t work in the dark anymore.
  7. Auto-focus: I have a TV just to my right. When whatever’s on changes brightness, it makes the camera autofocus freak out. I typically keep the TV muted. Now I’ll pause it while on calls.
  8. Microphone: I have an external USB mic (Blue Yeti). Turns out I had it in the wrong position relative to how I’m sitting. Thanks to a co-worker’s “webcam basics” slides for that tip (confirmed in Blue Yeti docs).
  9. Despite that, after recording a video meeting of just myself I still didn’t like how the audio turned out. So I bought an inexpensive lavalier microphone from Amazon; figured it’s worth a try.
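Those v4l2-ctl settings don’t persist across reboots or re-plugs, so a small script to reapply them is handy. A sketch, assuming the C930e shows up as /dev/video0 (check `v4l2-ctl --list-devices` for the actual device node):

```shell
#!/bin/sh
# Reapply my C930e tweaks (values from the list above).
# /dev/video0 is an assumption -- verify with: v4l2-ctl --list-devices
DEV=/dev/video0
v4l2-ctl -d "$DEV" --set-ctrl zoom_absolute=125
v4l2-ctl -d "$DEV" --set-ctrl tilt_absolute=-36000
v4l2-ctl -d "$DEV" --set-ctrl sharpness=150
```

Run it at login (or hang it off a udev rule) and the camera comes up framed and sharpened the way you left it.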

One thing I cannot figure out how to do is bokeh. I think that remains a gap between what I’ve got now and higher end gear.

Red Hat SRE 2020 Goals and Projects (and Hiring!)

Hey all, happy new year!

Been a quarter or so since my last post :-/ Wanted to share some updated info about the Service Delivery SRE team at Red Hat for 2020!

Some of our top level goals:
  • Improve observability – tracing, log analysis
  • Improve reliability – load shedding, autoscaling
  • Launch a boatload of features
  • Establish mathematically provable release criteria
  • Increase capacity planning and demand forecasting for production services
  • Widen availability to additional regions and cloud providers (have you built and supported production services on GCP?)

We’ve got several openings. They’re all REMOTE-FRIENDLY! Don’t worry about the specific titles – they’re a byproduct of how RH backend systems work.

If you think you check the majority of boxes on the job posting, and ESPECIALLY if you’ve done any of these things in the past…please ping us.  We’re actively hiring into all of these roles ASAP. 

So come have some fun with us.  Build and run cool shit.  Be at the forefront of operationalizing OpenShift 4.  Develop in go.  Release continuously.

Openings as of 19-Jan-2020

China (AWS China knowledge desired)

Senior SRE

SRE https://global-redhat.icims.com/jobs/75764/site-reliability-engineer/job?hub=7


North America, Israel
Principal SRE


Senior SRE https://us-redhat.icims.com/jobs/73518/senior-software-engineer/job?hub=7

Senior SRE https://global-redhat.icims.com/jobs/75022/senior-service-reliability-engineer—devops-and-ci/job?hub=7


Senior Security Software Engineer https://us-redhat.icims.com/jobs/68256/senior-security-software-engineer/job?hub=7

Choose Your Own Adventure: SRE Life at Red Hat

Red Hat does managed services. We do DevOps in the style of the Google SRE Book. We’re a decent-sized and growing org that supports customers 24/7. Our team is at the forefront of operationalizing OpenShift 4’s revolutionary set of capabilities, and is a way to market for many of Red Hat’s newest portfolio offerings.

What does a day-in-the-life of a Red Hat SRE look like… You remember these, don’t you?


If you drop into one of our Slack channels, you might find some interesting discussions on

  • Building and maintaining CI/CD pipelines.
  • Writing Operators in golang to handle managed services use-cases.
  • Handling upgrades of managed OpenShift clusters.
  • Chatting with cloud provider technical teams on weird quirks of their API.
  • Debugging bizarre, nearly intractable production issues using the scientific method.
  • Ensuring our managed OpenShift platforms are the most secure offerings possible.
  • Patternfly-based front-end work: https://cloud.redhat.com/openshift
  • We should probably write a library that lets teams version control their SLI/SLO/SLAs and generate Grafana dashboards from them.
    • Prometheus and Grafana and Thanos ftw.
  • Developing the microservices behind api.openshift.com.
    • Actually operating those microservices.

I hear you might be interested in a job like this. Let me know, and we can sync up about it!

Maybe Stop Sending Me Emails about Performance :-)

[I’ve been meaning to write this post for several months]

Earlier this year I changed roles within Red Hat. My new role is “OpenShift SaaS Architect”, and organizationally is part of Red Hat Service Delivery.

Service Delivery encompasses:

Basically, if you’ve had any interaction with OpenShift 4, you’ve likely consumed those services.

I’d been in my previous role for 7 years, and celebrated my 10th anniversary at Red Hat by being acquired by Big Blue. My previous team (Red Hat Performance and Scale) afforded me endless technical challenges, and opportunities to travel, present, help shape product, and build engineering teams from the ground up. Perhaps most importantly, I had the opportunity to mentor as many Red Hatters as I possibly could.

Red Hat Service Delivery allows me to broaden my technical and architecture skill set to areas outside of performance, scale and optimization, while letting me apply the many hard-fought lessons from prior chapters in my career.

Hopefully $subject makes a bit more sense now.  Onward!

Building Grafana from source on Fedora

Here are the official docs for building Grafana from source.  And below are my notes on how to build Grafana, starting from a clean Fedora 27 Cloud image.

# Install dependencies (writing to /etc/yum.repos.d requires root)
curl https://dl.yarnpkg.com/rpm/yarn.repo | sudo tee /etc/yum.repos.d/yarn.repo
sudo yum install golang yarn nodejs rubygems ruby-devel redhat-rpm-config rpm-build git -y
gem install fpm

Setup the go environment.

# go environment
mkdir ~/go
export GOPATH=~/go
export PATH=$PATH:$(go env GOPATH)/bin

Download the various repositories required to build.  Here you could also clone your fork/branch of Grafana into $GOPATH/src.

# Pull sources required to build
go get github.com/grafana/grafana golang.org/x/sync/errgroup github.com/codegangsta/cli 
cd $GOPATH/src/github.com/grafana/grafana
npm install

Now you can make any sort of local changes, or just build from HEAD.

go run build.go setup                 # takes about 45 seconds
time go run build.go build pkg-rpm    # takes about 7 minutes

The build will spit out an RPM in a folder called dist:

Created package {:path=>"./dist/grafana-5.0.0-1517715437pre1.x86_64.rpm"}

Docker operations slowing down on AWS (this time it’s not DNS)

I’m CC’d on mails when things get slow, but never when things work as expected or are fast…oh well. Like an umpire in baseball, if we are doing our jobs, we are invisible.

Subject:  Docker operations slowing down

I reach for my trusty haiku for this type of thing:

Ah, but in this scenario, it is something more…sinister (my word). What could be more sinister than DNS, you say? It’s the magical QoS system by which a cloud provider creatively rents you resources. The system that allows for any hand-wavy repackaging of compute or network or disk into a brand new tier of service…

Platinum. No, super Platinum. What’s higher than platinum? Who cares, we are printing money and customers love us because we have broken through their antiquated finance process. We will gladly overpay via OpEx just to avoid that circus.

But I digress…

In this scenario, I was working with one of our container team folks who had a report of CI jobs failing; someone had debugged a bit and pinned the blame on docker. I watch the reproducer run. It is running

docker run --rm fedora date

in a tight loop. I watch as the docker daemon gets through its 5000th loop iteration, and…still good to go. On average, ~3 seconds to start a container and delete it. Not too bad; certainly something a CI job should be able to handle. I continue to stare at tmux and then it happens…WHAM! 82 seconds to start the last container. Ahh, good. Getting a reproducer is almost always the hardest part of the process. Once we have a tight debug loop, smart people can figure things out relatively quickly.

I am looking at top in another window, and I see systemd-udev at the top of the list…what the…

As much as I would love to blame DNS for this, I have a hunch this is storage related now, because the reproducer shouldn’t be doing anything on the network. Now I am running ps in a loop and grepping for ” D “. Why? Because that is the process state when a thread is waiting on I/O. I know this because of several terribly painful debugging efforts with multipath in 2010. Looking back, it may have been those situations that have made me run screaming from filesystem and disk performance issues ever since 🙂
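The loop in question is nothing fancy. A sketch of the idea (the STAT field is the 8th column of `ps aux` output; matching `^D` also catches variants like `D+`):

```shell
# Watch for threads stuck in uninterruptible sleep ("D" state).
# Column 8 of `ps aux` is STAT; print any process whose state starts with D.
while true; do
  ps aux | awk '$8 ~ /^D/'
  sleep 1
done
```

Anything that shows up here repeatedly is blocked on I/O, which is exactly the signal you want when storage is the suspect.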

From man ps:

 Here are the different values that the s, stat and state output specifiers (header "STAT" or "S") will display to describe the state of a process:

 D uninterruptible sleep (usually IO)
 R running or runnable (on run queue)
 S interruptible sleep (waiting for an event to complete)
 T stopped by job control signal
 t stopped by debugger during the tracing
 W paging (not valid since the 2.6.xx kernel)
 X dead (should never be seen)
 Z defunct ("zombie") process, terminated but not reaped by its parent

Normally, processes oscillate between R and S, often imperceptibly (well, at least not something you see very often in top).  You can easily trace this with the systemtap script sleepingBeauties.stp if you really need to.  This script will print a backtrace of any thread that enters D state for a configurable amount of time.

Anyway here are the threads that are in D state.

root 426 0.4 0.0 0 0 ? D 16:10 0:08 [kworker/7:0]
root 5298 0.2 0.0 47132 3916 ? D 16:39 0:00 /usr/lib/systemd/systemd-udevd
root 5668 0.0 0.0 47132 3496 ? D 16:40 0:00 /usr/lib/systemd/systemd-udevd
root 24112 0.5 0.0 0 0 ? D 16:13 0:08 [kworker/u30:0]
root 5668 0.0 0.0 47132 3832 ? D 16:40 0:00 /usr/lib/systemd/systemd-udevd
root 5656 0.0 0.0 47132 3884 ? D 16:39 0:00 /usr/lib/systemd/systemd-udevd
root 29884 1.1 0.0 0 0 ? D 15:45 0:37 [kworker/u30:2]
root 5888 0.0 0.0 47132 3884 ? D 16:40 0:00 /usr/lib/systemd/systemd-udevd
root 5888 0.5 0.0 47132 3904 ? D 16:40 0:00 /usr/lib/systemd/systemd-udevd
root 5964 0.0 0.0 47132 3816 ? D 16:40 0:00 /usr/lib/systemd/systemd-udevd
root 29884 1.1 0.0 0 0 ? D 15:45 0:37 [kworker/u30:2]
root 5964 0.3 0.0 47132 3916 ? D 16:40 0:00 /usr/lib/systemd/systemd-udevd
root 5964 0.2 0.0 47132 3916 ? D 16:40 0:00 /usr/lib/systemd/systemd-udevd
root 24112 0.5 0.0 0 0 ? D 16:13 0:08 [kworker/u30:0]

That is interesting to me. udevd is in the kernel’s path for allocating/de-allocating storage devices. I am now convinced it is storage. kworker is a workqueue kernel thread that fires when the kernel’s writeback watermarks (dirty pages) are hit. For my extreme low-latency work, I documented how to shove these in a corner in my Low Latency Tuning Guide for Red Hat Enterprise Linux 7.

I move over to another tmux pane and I try:

dd if=/dev/zero of=/root/50MB bs=1M count=10 oflag=sync

I know that if this does not complete in < 5 seconds, something is terribly hosed.  Aaaaaand it hangs.  This process now shows up in my ps loop looking for D state processes.  So I have it narrowed down.  Something is wrong with the storage on this VM, and it only shows up after 5000 containers are started (well, I am told it varies by a few thousand here and there).

This may seem like a tangent but I promise it is going somewhere:

Nearly two years ago, when we were first standing up openshift.com version 3 on AWS, we ran into a few eerily similar issues. I remember that our etcd cluster would suddenly start freaking out (that is a technical term). Leader elections, nodes going totally offline…And I remember working with our AWS contacts to figure it out. At the time it was a little less well-known; today, just by googling, it appears fairly well understood. The issue with this reproducer turns out to be something called BurstBalance. BurstBalance is AWS business logic interfering with all that is good and holy. If you purchase storage, you should be able to read and write from it, no?

As with all public cloud, you can do whatever you want…for a price. BurstBalance is the creation of folks who want you to get hooked on great performance (gp2 can run at 3000+ IOPS), but then when you start doing something more than dev/test and run into these weird issues, you’re already hooked and you have no choice but to pay more for a service that is actually usable. This model is seen throughout public cloud. For example, take the preemptible instances on GCE or the t2 instance family on AWS.

I have set up my little collectd->graphite->grafana dashboard that I use for this sort of thing. You can see things are humming along quite nicely for a while, and then…yeah.

Once the reproducer exhausts the gp2 volume’s BurstBalance, things go very, very badly. Why? Simple. Applications were not written to assume that storage would ever slow down like this. Issues in docker cascade back up the stack until finally a user complains that it took 5 minutes to start their pod.

The reason is that we have not paid our bounty to the cloud gods.

Here is BurstBalance and the magical AWS QoS/business logic in action.

You can see it looks a lot like my grafana graphs…quota is exhausted, and the IOPS drop to a trickle.

What would happen then if we did kneel at the altar of Bezos and pay him his tithe? I will show you.

The reproducer is chugging along, until it slams into that magical AWS business logic. Some QoS system somewhere jumps for joy at the thought of earning even more money. This time, we will pay him his fee…for science.

You can see that our reproducer recovers (lower is better) once we flip the volume type to provisioned IOPS (io1)…this was done on the fly. We set the io1 volume to 1000 IOPS (mostly random choice…), which is why it is slightly higher after the recovery than it was before the issue occurred. gp2 can crank along really, really fast. That is, until…


The takeaways from this debugging session are:

  • Regardless of cloud provider, you pay a premium for both performance and determinism.
  • If you think you are saving money up front, just wait until the production issues start rolling in which, conveniently, can easily be solved by simply clicking a little button and upgrading to the next tier. ¬†Actually, it is brilliant and I would do the same if I had the unicorn QoS system at my disposal, and was tasked with converting that QoS system into revenue.
  • I now must proactively monitor BurstBalance and flip volumes to io1 instead of let them hit the wall in production. Monitoring for this (per AWS documentation, use CloudWatch) is an additional fee appears to be included in their CloudWatch free tier.
  • Perhaps we flip all volumes to io1 proactively and then flip them back when the critical period is over.
  • One thing I ran out of time to verify is what happens to my BurstBalance if I flip to io1, then back to gp2? ¬†Is my BurstBalance reset? ¬†Probably not, but I haven’t done the leg work yet to verify.
  • We will do less I/O when using overlay2 (might just delay the inevitable).
  • All super critical things (like etcd) get io1 out of the gate. No funny business.
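For context on why the wall is so abrupt: the public EBS docs describe gp2 as a token bucket. You earn 3 IOPS per GB as baseline, can burst to 3,000 IOPS, and start with a 5.4-million-credit bucket. A quick back-of-the-envelope sketch (my own arithmetic, using those documented numbers, not anything from an AWS API) shows how long a volume can burst:

```shell
# gp2 token-bucket math, per the public EBS docs:
SIZE_GB=100
BASELINE=$((SIZE_GB * 3))   # credits earned per second (300 IOPS baseline)
BURST=3000                  # maximum burst IOPS
CREDITS=5400000             # initial/maximum credit balance
# While bursting flat out, the bucket drains at (BURST - BASELINE) credits/sec
SECS=$((CREDITS / (BURST - BASELINE)))
echo "a ${SIZE_GB}GB gp2 volume can burst for ~${SECS}s ($((SECS / 60)) min)"
```

Roughly half an hour of full-tilt bursting for a 100GB volume, which lines up with how suddenly the reproducer hit the wall.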



Juggling backing disks for docker on RHEL7, using atomic storage migrate

Quick article on how to use the atomic storage commands to swap out an underlying storage device used for docker’s graph storage.

  • I am currently using overlay2 for docker storage, and /var/lib/docker is currently on my root partition :-/
  • I want to add a 2nd disk just for docker storage.
  • I want to keep my images, rather than have to download them again.

I have a few images in my system:

# docker images
 docker.io/openshift/hello-openshift latest 305f93951299 3 weeks ago 5.635 MB
 docker.io/centos centos7 3bee3060bfc8 6 weeks ago 192.6 MB
 docker.io/monitoringartist/grafana-xxl latest 5a73d8e5f278 10 weeks ago 393.4 MB
 docker.io/fedora latest 4daa661b467f 3 months ago 230.6 MB
 docker.io/jeremyeder/c7perf latest 3bb51319f973 4 months ago 1.445 GB
 brew-pulp-docker01.redacted.redhat.com:8888/rhel7/rhel-tools latest 264d7d025911 4 months ago 1.488 GB
 brew-pulp-docker01.redacted.redhat.com:8888/rhel7 latest 41a4953dbf95 4 months ago 192.5 MB
 docker.io/busybox latest 7968321274dc 6 months ago 1.11 MB
 # df -h
 Filesystem Size Used Avail Use% Mounted on
 /dev/mapper/vg0-root 193G 162G 23G 88% /
 devtmpfs 16G 0 16G 0% /dev
 tmpfs 16G 0 16G 0% /dev/shm
 tmpfs 16G 804K 16G 1% /run
 tmpfs 16G 0 16G 0% /sys/fs/cgroup
 /dev/vdc1 100G 33M 100G 1% /var/lib/docker/overlay
 /dev/vda1 2.0G 549M 1.5G 28% /boot

All of docker’s storage right now consumes about 4GB. It’s important to verify this because the migrate commands we’re about to walk through require that much free space to complete the migration:

# du -hs /var/lib/docker
 3.9G /var/lib/docker

By default, the atomic migrate commands will write to /var/lib/atomic, so whatever filesystem holds that directory will need at least (in my case) 4GB free.
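Since the export will fail partway through if the target filesystem fills up, a pre-flight check is worth scripting. This is my own sketch, not part of the atomic tooling; the default paths match the ones discussed above:

```shell
# check_space [SRC] [DST]: fail fast if the filesystem holding DST cannot
# fit the docker graph storage currently under SRC.
check_space() {
  src=${1:-/var/lib/docker}
  dst=${2:-/var/lib/atomic}
  need=$(du -sk "$src" | awk '{print $1}')       # KB used by graph storage
  avail=$(df -Pk "$dst" | awk 'NR==2 {print $4}') # KB free on target fs
  if [ "$avail" -lt "$need" ]; then
    echo "insufficient space: need ${need}K, have ${avail}K" >&2
    return 1
  fi
  echo "ok: ${avail}K available for ${need}K of graph storage"
}
# e.g. check_space /var/lib/docker /var/lib/atomic
```

Run that (or just eyeball df and du, as above) before kicking off atomic storage export.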

The migration process has several phases:

  1. Export any containers and images.
  2. Allow user to adjust storage on the system.
  3. Allow user to adjust storage configuration of docker.
  4. Import containers and images back into the new docker graph storage.

I’m using a VM with spinning disks so this takes a little longer than it otherwise might, but let’s start the export:

# time atomic storage export
 Exporting image: 5a73d8e5f278
 Exporting image: 3bb51319f973
 Exporting image: 7968321274dc
 Exporting image: 3bee3060bfc8
 Exporting image: 4daa661b467f
 Exporting image: 264d7d025911
 Exporting image: 41a4953dbf95
 Exporting image: 305f93951299
 Exporting volumes
 atomic export completed successfully

real 1m57.159s
 user 0m1.094s
 sys 0m6.190s

OK that went oddly smoothly, let’s see what it actually did:

# find /var/lib/atomic/migrate

Seems reasonable…incidentally that info.txt just includes the name of the storage driver used at the time migrate was executed.

# du -hs /var/lib/atomic
3.8G /var/lib/atomic

OK let’s do the deed:

# atomic storage reset
 Docker daemon must be stopped before resetting storage

Oh, I guess that would make sense.

# systemctl stop docker
# atomic storage reset

OK, at this point /etc/sysconfig/docker-storage is reset to its default state, and I have nothing in my docker graph storage.

Because I want to continue to use overlay2, I will use the atomic storage modify command to make that so:

# atomic storage modify --driver overlay2
# cat /etc/sysconfig/docker-storage
 DOCKER_STORAGE_OPTIONS="--storage-driver overlay2 "

Things are looking good so far.

Now about adding more storage.

  • I have added a new virtual storage device to my VM called /dev/vdc1
  • I have partitioned and formatted it with XFS filesystem.
  • I have mounted it at /var/lib/docker and setup an fstab entry.
# lsblk
 vda 252:0 0 200G 0 disk
 ├─vda1 252:1 0 2G 0 part /boot
 └─vda2 252:2 0 198G 0 part
   ├─vg0-swap 253:0 0 2G 0 lvm [SWAP]
   └─vg0-root 253:1 0 196.1G 0 lvm /
 vdb 252:16 0 100G 0 disk
 └─vdb1 252:17 0 100G 0 part
 vdc 252:32 0 100G 0 disk
 └─vdc1 252:33 0 100G 0 part /var/lib/docker

At this point we are ready to restart docker and import the images from my previous storage. First let me verify that it’s OK.

# systemctl start docker
# docker info|grep -i overlay2
 Storage Driver: overlay2

Cool, so docker started up correctly and it has the overlay2 storage driver that I told it to use with the atomic storage modify command (from previous step).

Now for the import…

# time atomic storage import
 Importing image: 4daa661b467f
 ae934834014c: Loading layer [==================================================>] 240.3 MB/240.3 MB
 Loaded image: docker.io/fedora:latest
 Importing image: 3bee3060bfc8
 dc1e2dcdc7b6: Loading layer [==================================================>] 200.2 MB/200.2 MB
 Loaded image: docker.io/centos:centos7
 Importing image: 7968321274dc
 38ac8d0f5bb3: Loading layer [==================================================>] 1.312 MB/1.312 MB
 Loaded image: docker.io/busybox:latest
 Importing image: 264d7d025911
 827264d42df6: Loading layer [==================================================>] 202.3 MB/202.3 MB
 9ca8c628d8e7: Loading layer [==================================================>] 10.24 kB/10.24 kB
 a03f55f719da: Loading layer [==================================================>] 1.336 GB/1.336 GB
 Loaded image: brew-pulp-docker01.redacted.redhat.com:8888/rhel7/rhel-tools:latest
 Importing image: 305f93951299
 5f70bf18a086: Loading layer [==================================================>] 1.024 kB/1.024 kB
 c618fb2630cb: Loading layer [==================================================>] 5.637 MB/5.637 MB
 Loaded image: docker.io/openshift/hello-openshift:latest
 Importing image: 5a73d8e5f278
 8d4d1ab5ff74: Loading layer [==================================================>] 129.4 MB/129.4 MB
 405d1c3227e0: Loading layer [==================================================>] 3.072 kB/3.072 kB
 048845c41855: Loading layer [==================================================>] 277.2 MB/277.2 MB
 Loaded image: docker.io/monitoringartist/grafana-xxl:latest
 Importing image: 3bb51319f973
 34e7b85d83e4: Loading layer [==================================================>] 199.9 MB/199.9 MB
 ab7578fbc6c6: Loading layer [==================================================>] 3.072 kB/3.072 kB
 3e89505f5573: Loading layer [==================================================>] 58.92 MB/58.92 MB
 753668c55633: Loading layer [==================================================>] 1.169 GB/1.169 GB
 d778d7335b8f: Loading layer [==================================================>] 11.98 MB/11.98 MB
 5cd21edffb34: Loading layer [==================================================>] 45.1 MB/45.1 MB
 Loaded image: docker.io/jeremyeder/c7perf:latest
 Importing image: 41a4953dbf95
 Loaded image: brew-pulp-docker01.redacted.redhat.com:8888/rhel7:latest
 Importing volumes
 atomic import completed successfully
 Would you like to cleanup (rm -rf /var/lib/atomic/migrate) the temporary directory [y/N]n
 Please restart docker daemon for the changes to take effect

 real 1m23.951s
 user 0m1.391s
 sys 0m4.095s

Again, that went smoothly. I opted not to have it clean up /var/lib/atomic/migrate automatically because I want to verify a thing or two first.

Let’s see what’s on my new disk:

# df -h /var/lib/docker
Filesystem Size Used Avail Use% Mounted on
/dev/vdc1 100G 3.9G 97G 4% /var/lib/docker

OK that looks reasonable. Let’s start docker and see if things imported correctly:

# systemctl restart docker

# docker images
 docker.io/openshift/hello-openshift latest 305f93951299 3 weeks ago 5.635 MB
 docker.io/centos centos7 3bee3060bfc8 6 weeks ago 192.6 MB
 docker.io/monitoringartist/grafana-xxl latest 5a73d8e5f278 10 weeks ago 393.4 MB
 docker.io/fedora latest 4daa661b467f 3 months ago 230.6 MB
 docker.io/jeremyeder/c7perf latest 3bb51319f973 4 months ago 1.445 GB
 brew-pulp-docker01.redacted.redhat.com:8888/rhel7/rhel-tools latest 264d7d025911 4 months ago 1.488 GB
 brew-pulp-docker01.redacted.redhat.com:8888/rhel7 latest 41a4953dbf95 4 months ago 192.5 MB
 docker.io/busybox latest 7968321274dc 6 months ago 1.11 MB

Images are there.  Can I run one?

# docker run --rm fedora pwd

Indeed I can.  All seems well.

This utility is very handy in scenarios where you want to do some surgery on the backend storage, but do not want to throw away and re-download images and containers. I could envision using this utility when:

  • Moving from one graph driver to another. Note that we have SELinux support coming to overlay2 in RHEL 7.4.
  • Perhaps you have a lot of images or containers and slow internet.

Either way, this process was about as smooth as it could be…and a very clean UX, too.

Building KDAB hotspot ‘perf’ visualization tool on Fedora

As any respectable software performance person knows, perf is your best (only?) friend. For example, perf report -g has shined a light into the deepest, darkest corners of debugging territory. Since you asked, it can happily run in a container, too (albeit requiring elevated privileges, but we’re debugging here…).

Typically, console-formatted output is fine for grokking perf reports, but having recently become addicted to Go’s pprof visualization (dot format), handy flame graphs, and, on the morbid occasion, VTune, I started looking around for a way to more clearly understand a particular perf recording.

Googling turned up an interesting Qt-based tool called hotspot, by a company called KDAB.  Screenshots indicated it might be worth kicking the tires.

After some bouncing around figuring out Fedora equivalent package names, I was able to quickly build and run hotspot.  I ran a quick perf record to see if it was going to work at all:

$ sudo perf record --call-graph dwarf sleep 10
$ ./bin/hotspot ./perf.data

And voila…


Folks at KDAB even included a built-in flame graph:


The interface is clean, bug-free, and useful.  Trying to load a large perf.data file was a bit ugly and RAM-intensive; I would likely stick to command-line parsing for those.  Or, as we do in pbench, reduce the collection frequency to 100Hz and take bite-sized samples over the life of the test.

nsinit: per-container resource monitoring of Docker containers on RHEL/Fedora

The use-case for per-application resource counters

Administrators of *NIX-based systems are quite accustomed to viewing resource counters strewn throughout the system, in places like /proc, /sys and, more recently, /cgroup or /sys/fs/cgroup. With the release of RHEL6 came widespread enterprise adoption of Control Groups (cgroups), which had been implemented steadily over a series of years and vetted both there and in Fedora (RHEL’s upstream).

Implementing cgroups not only let sysadmins carve up a single OS into multiple logical partitions, it also bought them per-cgroup counters that the kernel maintains. That’s in addition to common use-cases such as quality-of-service guarantees or charge-back.

Docker’s unique twist

With the recent uptick in adoption of Linux containers (Docker encapsulates several mature technologies into an impressive usability package), administrators might be wondering where the per-container resource counters are. We’re in luck! Since Docker heavily relies on cgroups, many of the counters that sysadmins are familiar with “just work”. They could benefit from some usability improvements, but if you’re comfortable spelunking through the cgroup VFS, you can dig them out fairly easily.

I should note that the specific hierarchy and commands below are specific to RHEL and Fedora, so you might have to customize some paths or package names for your system.

In the most recent versions of Fedora, engineers have begun building and shipping a binary called ‘nsinit‘, part of libcontainer, the “execution driver” for Docker. nsinit is a very powerful debugging utility that lets sysadmins not only view per-container resource counters, but also view the container’s runtime configuration and “jump into” a running container.

How to use the nsinit utility

First you should grab a copy from Fedora, or build it yourself. Building it yourself is an unnecessarily complicated exercise, so I’m glad they started building it for Fedora so that you can just do:

# yum install --enablerepo=updates-testing golang-github-docker-libcontainer

$ rpm -qf `which nsinit`

# nsinit
 nsinit - A new cli application

 nsinit [global options] command [command options] [arguments...]


 exec execute a new command inside a container
 init runs the init process inside the namespace
 stats display statistics for the container
 config display the container configuration
 nsenter init process for entering an existing namespace
 pause pause the container's processes
 unpause unpause the container's processes
 help, h Shows a list of commands or help for one command

I’ll cover the most useful of nsinit’s capabilities; config, stats and exec.

Note:  nsinit currently requires that you run it while you're inside the container's state directory.  So from here on, all commands assume you're in there.

So, something like this:

# docker ps -q

# CID=`docker ps -q`
# cd /var/lib/docker/execdriver/native/$CID*
# ll
total 8
-rw-r-xr-x. 1 root root 3826 Sep  1 20:11 container.json
-rw-r--r--. 1 root root  114 Sep  1 20:11 state.json

Those files are plain-text readable, although not very human-readable.  nsinit pretty-prints these files.  For example, here is an abridged version of the output of nsinit config (full version here).  Note that you can get much of this info (but not all) from docker inspect.

# nsinit config

 "mount_config": {
 "mounts": [
 "type": "bind",
 "source": "/var/lib/docker/init/dockerinit-1.1.1",
 "destination": "/.dockerinit",
 "private": true
 "type": "bind",
 "source": "/etc/resolv.conf",
 "destination": "/etc/resolv.conf",
 "private": true
 "mount_label": "system_u:object_r:svirt_sandbox_file_t:s0:c631,c744"
 "hostname": "4caad5492898",
 "environment": [
 "namespaces": {
 "NEWIPC": true,
 "NEWNET": true,
 "NEWNS": true,
 "NEWPID": true,
 "NEWUTS": true
 "capabilities": [
 "networks": [
 "type": "loopback",
 "address": "",
 "gateway": "localhost",
 "mtu": 1500
 "type": "veth",
 "bridge": "docker0",
 "veth_prefix": "veth",
 "address": "",
 "gateway": "",
 "mtu": 1500
 "cgroups": {
 "name": "4caad5492898f1a4230353de15e2acfc05809c69d05ec7289c6a14ef6d57b195",
 "parent": "docker",
 "allowed_devices": [
 "process_label": "system_u:system_r:svirt_lxc_net_t:s0:c631,c744",
 "restrict_sys": true

The stats mode is far more interesting. nsinit reads cgroup counters for CPU and memory usage. The network statistics come from /sys/class/net/<EthInterface>/statistics. From here you can see how much memory your application is using, chart its growth, watch CPU utilization, cross-check data from other tools, etc.

 "network_stats": {
 "rx_bytes": 180568,
 "rx_packets": 89,
 "tx_bytes": 28316,
 "tx_packets": 92
 "cgroup_stats": {
 "cpu_stats": {
 "cpu_usage": {
 "total_usage": 985559718,
 "percpu_usage": [
 "usage_in_kernelmode": 510000000,
 "usage_in_usermode": 440000000
 "throlling_data": {}
 "memory_stats": {
 "usage": 27992064,
 "max_usage": 29020160,
 "stats": {
 "active_anon": 4411392,
 "active_file": 3149824,
 "cache": 22278144,
 "hierarchical_memory_limit": 9223372036854775807,
 "hierarchical_memsw_limit": 9223372036854775807,
 "inactive_anon": 0,
 "inactive_file": 19128320,
 "mapped_file": 3723264,
 "pgfault": 94783,
 "pgmajfault": 25,
 "pgpgin": 19919,
 "pgpgout": 13902,
 "rss": 4460544,
 "rss_huge": 2097152,
 "swap": 0,
 "total_active_anon": 4411392,
 "total_active_file": 3149824,
 "total_cache": 22278144,
 "total_inactive_anon": 0,
 "total_inactive_file": 19128320,
 "total_mapped_file": 3723264,
 "total_pgfault": 94783,
 "total_pgmajfault": 25,
 "total_pgpgin": 19919,
 "total_pgpgout": 13902,
 "total_rss": 4460544,
 "total_rss_huge": 2097152,
 "total_swap": 0,
 "total_unevictable": 0,
 "unevictable": 0
 "failcnt": 0
 "blkio_stats": {}
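If you want to chart a counter like memory usage over time, a quick-and-dirty extractor over that JSON is enough to get started. This is my own sketch, not part of nsinit; the key name comes from the sample output above, and a real consumer should use a proper JSON parser rather than text matching:

```shell
# mem_usage: pull the current memory usage (bytes) out of nsinit stats
# JSON on stdin. Matches the first `"usage": N` key, which in the output
# above is memory_stats.usage.
mem_usage() {
  grep -o '"usage": *[0-9]*' | head -1 | grep -o '[0-9][0-9]*$'
}
# e.g. poll once per second from the container's state directory:
#   while sleep 1; do nsinit stats | mem_usage; done
```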

nsenter is commonly used to run a command inside an existing container, something like:

# nsenter -m -u -n -i -p -t 19119 bash

Where 19119 is the PID of a process in the container.  Ugly.  nsinit makes this slightly easier (at least IMHO):

# nsinit exec cat /etc/hostname
# nsinit exec bash
bash-4.2# exit

nsinit’s capabilities and reported statistics are incredibly useful when debugging the implementation of QoS for each container, implementing/verifying resource-ceilings/guarantees, and for a more complete understanding of what your containers are doing.

This area is fast-moving…I did want to call out two other important developments, which should ultimately have broader applicability than nsinit.

Google has published a project called cAdvisor that provides a basic web interface, but more importantly an API for higher layers (such as Kubernetes) to use.

Red Hat has proposed container support for Performance Co-Pilot, a system-level performance monitoring utility in RHEL7, along with goals of teaching many other tools about containers.