Staff Software Engineer, Anywhere Cloud - AI Systems & Runtimes

Cloudera, Inc.
United States, Texas, Austin
515 Congress Avenue
Apr 02, 2026

Business Area:

Engineering

Seniority Level:

Mid-Senior level

Job Description:

At Cloudera, we empower people to transform complex data into clear and actionable insights. With as much data under management as the hyperscalers, we're the preferred data partner for the top companies in almost every industry. Powered by the relentless innovation of the open source community, Cloudera advances digital transformation for the world's largest enterprises.

Ready to take cloud innovation to the next level? Join Cloudera's Anywhere Cloud team and help deliver a true "build your own pipeline, bring your own engine" experience - enabling data and AI workloads to run anywhere, without friction or vendor lock-in.

We bring the best of public cloud - cost efficiency, scalability, elasticity, and agility - to wherever data lives: public clouds, private data centers, and the edge. Powered by Kubernetes, our hybrid architecture separates compute and storage to maximize flexibility and optimize infrastructure usage.

This isn't just cloud management - it's about building a consistent, secure, and compliant cloud experience that gives organizations full access to all their data, anywhere.

With the acquisition of Taikun, we're simplifying Kubernetes and cloud management even further, creating a unified, scalable, future-ready platform. If you're passionate about Kubernetes - not just using it, but building it at the core, managing workloads across hybrid clouds and data centers, and obsessing over performance and DevOps - this is where you belong.

We are seeking a Staff Software Engineer to lead the architecture and delivery of our cloud-native AI platform. In this high-impact role, you will bridge the gap between cutting-edge AI research and production-grade Kubernetes environments. You will build the "nervous system" of our AI stack: optimizing how we run and manage open-source models (Llama, Qwen, etc.) using Kubernetes-native patterns like Custom Resource Definitions (CRDs) and Operators, enabling agentic AI to thrive, and designing integration patterns that let our product teams and customers consume AI capabilities seamlessly.

As a Staff Software Engineer, you will:

  • Enterprise AI Services: Design and implement elegant, scalable application services (Go/Node.js) that wrap AI capabilities for enterprise use.

  • K8s-Native AI Orchestration: Lead the deployment of inference servers (vLLM, Triton) using KServe, KubeRay, or Knative to ensure serverless-style scaling for AI workloads.

  • Developer Velocity: Build internal tooling, SDKs, and "AI Gateways" that enhance team agility and simplify the integration of Foundation Models (Llama, GPT) into product features.

  • RAG & Prompt Engineering: Architect robust Retrieval-Augmented Generation (RAG) pipelines and prompt management services that integrate seamlessly with vector databases and enterprise data sources.

  • Cross-Functional Collaboration: Partner with UI engineers, UX designers, and Product Management to ensure the AI platform is not just powerful, but highly usable for internal developers.

  • Infrastructure & Security: Ensure AI workloads are secure, multi-tenant, and optimized for GPU resource scheduling (MIG, fractional GPUs) within Kubernetes.

We're excited about you if you have:

  • Bachelor's degree with 6+ years of software engineering experience (or equivalent tenure with a Master's/PhD), including at least 2 years focused on AI/ML systems.

  • Expert proficiency in Python (for the AI ecosystem) and strong competence in a systems language such as Go, Rust, or C++ (for high-performance serving layers).

  • Deep understanding of LLM deployment challenges and runtimes (e.g., vLLM, ONNX, TorchServe, Triton). Familiarity with quantization techniques (AWQ, GPTQ) to optimize model size/speed.

  • Experience building complex workflows using tools like LangChain or LlamaIndex, and deploying them on containerized infrastructure (Docker/Kubernetes).

  • Ability to navigate the rapidly changing AI landscape, filtering hype from practical engineering solutions, and driving technical alignment across teams.

You May Also Have:

  • Model Fine-Tuning: Experience with efficient fine-tuning techniques (PEFT, LoRA/QLoRA) on custom datasets.

  • GPU Optimization: Familiarity with CUDA programming or profiling GPU performance (Nsight Systems).

  • Open Source: Contributions to open-source AI projects (HuggingFace transformers, vLLM, etc.).

Why this role matters:

This is more than cloud management; it's about building the foundation for a consistent, secure, and compliant cloud experience that gives organizations 100% access to 100% of their data, anywhere.

With the recent acquisition of Taikun, we are simplifying Kubernetes and cloud management even further, creating a platform that is unified, scalable, and future-ready.

If you are passionate about Kubernetes - not just using it, but building it at the core, managing workloads across hybrid clouds and data centers, and obsessing over performance and DevOps - this is where you belong.

This role is not eligible for immigration sponsorship.

What you can expect from us:

  • Generous PTO Policy

  • Support for work-life balance with Unplugged Days

  • Flexible WFH Policy

  • Mental & Physical Wellness programs

  • Phone and Internet Reimbursement program

  • Access to Continued Career Development

  • Comprehensive Benefits and Competitive Packages

  • Paid Volunteer Time

  • Employee Resource Groups

EEO/VEVRAA

#LI-BV1

#LI-HYBRID
