Hi, I’m Jeremy Eder, a Distinguished Engineer at Red Hat, reporting to the VP of AI Engineering, where I work on cutting-edge strategies and projects. In 2025, I led the launch of llm-d, a distributed inference project started by Red Hat and Google and designed to run on Kubernetes. It became the headline feature of Red Hat OpenShift AI 3.0. In late spring, I began introducing agentic development patterns to Red Hat AI engineers.
The llm-d project involved leading a globally distributed team of more than 100 people, carved out to take the Neural Magic roadmap to the Red Hat Summit stage. The team achieved that in roughly seven weeks. From there, I decided the most important follow-up was figuring out how to keep vLLM busy: workloads are necessary to maintain the momentum we’d just built.

So my current project is figuring out how AI, particularly codegen, will impact Red Hat engineering. I write frequently about GenAI and AI adoption, and I share my technical status updates often.
Overall, my focus is on identifying technologies to bring to market, recommending the optimal order in which to do so, and navigating the complexities of partnerships, open source technology, law, ethics, and competition. To achieve this, I work directly with researchers and product teams to manage the flow of innovation.
Somewhat uniquely, I approach the majority of my work through system dynamics methods, and I believe a holistic approach to problem solving delivers the most impactful, durable results.
In this role, I:
- Bootstrapped Red Hat’s PyTorch team from scratch; Red Hat is now the #5 contributor to PyTorch (my goal for the team was top 15!). Little Red Hat, having just begun its AI journey, is once again punching above its weight. I planned how Red Hat could join the PyTorch community in a productive way, wrote the staffing plan, worked with our Open Source Program Office to line up Platinum sponsorships (money makes the world go ’round), and visited Meta to present Red Hat’s position on WheelNext. I also represented Red Hat and IBM on the PyTorch Governing Board and Technical Advisory Council (ended Dec 2025).
- Led a cross-functional steering committee of senior engineering and product executives to accelerate AI innovation between research and product, systematically quantifying the impact of each advancement and working with researchers to refine our projects.
- Spearheaded the creation of Red Hat’s Open Source AI strategy, aligning engineering, research, and product development with the broader AI ecosystem. I still have the whiteboard picture.
- Led Red Hat’s transition to Kubeflow, enabling scalable and reproducible ML workflows on Kubernetes ([read more](https://www.redhat.com/en/blog/open-source-ai-red-hat-our-journey-kubeflow-community)).
Recent work
- I created ambient-code.ai, a blog to track my codegen R&D. What I ultimately decided to build is a platform for AI-assisted software development. The differentiator is an async work pattern that leverages Kubernetes to schedule codegen tasks like any other HPC job. Anthropic is slowly delivering this platform’s feature set.
- My blog post “Your Codebase Is Probably Fighting Claude” went viral in the AI developer community in November 2025, attracting attention from companies like Anthropic, Microsoft, and leading AI research labs.
- An accompanying tool, AgentReady, evaluates research-backed attributes that determine how well a codebase works with AI coding agents. The core thesis: codegen LLMs do better when they have proper context. To quantify this, I use a benchmark called tbench (now superseded by tbench2), a harness created to compare coding agents against each other on real-world coding tasks. I wired it into AgentReady to quantify its recommendations and rebalance them as the technology progresses.
- As a capstone to the Open Source AI strategy and Kubeflow work described above, I delivered a keynote at Kubeflow Summit 2024.
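The async codegen pattern mentioned above, scheduling code-generation tasks on Kubernetes like any other batch/HPC job, can be sketched as a plain Job manifest. This is a minimal illustration only; the image name, prompt argument, and resource limits are hypothetical placeholders, not the actual ambient-code.ai implementation.

```python
def codegen_job(task_id: str, prompt: str,
                image: str = "example.com/codegen-agent:latest") -> dict:
    """Build a Kubernetes batch/v1 Job manifest that runs one codegen task
    to completion, so the cluster scheduler handles queuing and retries."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": f"codegen-{task_id}"},
        "spec": {
            "backoffLimit": 2,                # retry a flaky agent run twice
            "ttlSecondsAfterFinished": 3600,  # garbage-collect finished jobs
            "template": {
                "spec": {
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "agent",
                        "image": image,
                        "args": ["--prompt", prompt],
                        "resources": {"limits": {"cpu": "2", "memory": "4Gi"}},
                    }],
                },
            },
        },
    }

job = codegen_job("refactor-auth", "Refactor the auth module to use OIDC")
print(job["metadata"]["name"])  # codegen-refactor-auth
```

Once serialized, a manifest like this can be submitted with `kubectl apply` or the Kubernetes API, after which the task runs asynchronously under normal batch semantics (queuing, retries, cleanup) with no agent-specific orchestration required.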
Earlier at Red Hat
- I helped build, and led the launch of, [Red Hat OpenShift Service on AWS](https://aws.amazon.com/rosa/) (ROSA), the first non-AWS service ever listed directly in the AWS console. I led budgeting, staffing, and architectural best practices while working closely with Product leadership to translate business requirements into engineering execution.
- From 2015-2019, I led the team that made Kubernetes suitable for performance-sensitive workloads, particularly AI, telecommunications, and financial services. I was responsible for publishing performance and scalability data for the first public 2000+ node Kubernetes cluster in the CNCF. Later, I developed techniques for running ultra low-latency financial simulations on Kubernetes. You can see some of that work here.
- One of the projects I’m most proud of is bootstrapping the Kubernetes Resource Management Working Group and Network Plumbing Working Group. This was very early Kubernetes days, and it afforded me the opportunity to work directly with NVIDIA, Google, and Intel engineers to deliver GPU support on RHEL and OpenShift. Without this specific work, the creation of OpenDataHub.io and its downstream product Red Hat OpenShift AI may not have been possible.
- In both 2018 and 2019, I presented at NVIDIA GTC on Red Hat’s partnership with NVIDIA around the DGX platform, including Ian Buck announcing RHEL-on-DGX support during one of the keynotes. At the time, this was a big win for little Red Hat.
Before AI and containers
- I spent years in the financial services space, focusing on extreme low-latency architecture design, tuning, and jitter analysis. I helped build and launch the International Securities Exchange (acquired by Nasdaq) trading platform using RHEL6, the Real-time Kernel patchset, and Infiniband/RDMA network transport. I spent a ton of time in Chicago during this period as well, primarily supporting Citadel Securities, the Chicago Mercantile Exchange, and Jump Trading.
- I represented Red Hat at the Securities Technology Analysis Center (STAC) and created the first STAC-N1 stock trading benchmark on OpenShift/Kubernetes. I developed and published benchmark results for STAC-N (kernel-bypass networking) and STAC-A1 (risk calculation using GPUs).
- I wrote multiple performance tuning guides for RHEL6 and RHEL7, with primary stakeholders being high-frequency trading firms and stock exchanges. I worked directly with hardware partners like Mellanox, Solarflare, and Intel to bring prototype hardware support into the upstream Linux kernel and RHEL. I walked away from this type of work after attending the first DockerCon in SF, forever changed.
Recognition
- I received Red Hat’s Chairman’s Award in 2014 and remain an active mentor for many engineers across the company. I’ve published multiple patents on container infrastructure, including “Sharing filesystems between containers” and “Method and system for coordination of inter-operable infrastructure as a service (IaaS) and platform as a service (PaaS) systems.” That last one is from 2012, and I wish I had done something with it.
- I completed the MIT Sloan Executive Certificate in Strategy and Innovation program from November 2020 to June 2022 and continue to upskill regularly through MIT.
How I think about leadership
- Strong relationships are built on trust, mutual understanding, and reciprocity. Do what you say. Be up front, even if it’s bad news. Over-communicate, particularly in teams that are geographically dispersed.
- In every collection of humans, there are doers and “non-doers”. Find the doers as soon as possible and prove yourself as one of them.
- As a leader, your force multiplier is steering, not rowing.
- Micromanagement is a complete waste of time. Select the right people for the job and give them room to get on with it.
- Think of the end at the beginning. Pre-mortems are a great way to save time. Invest in planning, but avoid over-planning. Set aside quiet time to think.
- There is too much flying around to bother trying to keep mental notes. Write everything down.
- Have an opinion. Don’t stay silent. People, particularly good leads and managers, want to hear what you have to say. Don’t wait for complete information. It will never come.
Connect
My LinkedIn is over here. I’m open to conversations about AI infrastructure, platform engineering leadership, and building products that make complex systems accessible to everyone.