Employment Type

FULL_TIME

Experience

5-10

Solutions Architect - CPU and LPU

4/3/2026

You will drive the adoption of NVIDIA CPU and LPU-based AI infrastructure by designing and optimizing heterogeneous AI workloads for customers. This involves building proof-of-concepts, reference architectures, and providing technical leadership to solve complex performance bottlenecks.

Working Hours

40 hours/week

Company Size

10,001+ employees

Language

English

Visa Sponsorship

No

About The Company
Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.
About the Role

NVIDIA’s Solutions Architect team is looking for a software-focused Solutions Architect to drive adoption of next-generation AI infrastructure across NVIDIA CPU platforms and LPU-based inference systems. This role will focus on NVIDIA CPUs, including Grace, Vera, and future CPU generations, and on LPU platforms and LPX-class systems used to accelerate large language model inference and other latency-sensitive generative AI workloads. We are looking for someone who understands that AI efficiency is a full-stack challenge spanning model architecture, runtime, compiler, serving framework, host software, memory movement, and workload partitioning across CPU, GPU, and LPU.

As a Solutions Architect, you will be the first line of technical expertise between NVIDIA and our customers for CPU- and LPU-centric AI system design. You will help customers understand how NVIDIA CPUs and LPU-based systems can improve the efficiency, latency, throughput, and total cost of their AI workloads, especially when deployed alongside NVIDIA GPUs in heterogeneous production environments. Your work will range from proof-of-concept development and software stack optimization to technical leadership with customer architects, engineering teams, and senior decision makers. You will engage directly with developers, ML engineers, researchers, platform architects, and IT leaders to identify bottlenecks, design optimization strategies, and build deployable reference architectures. You will also work closely with NVIDIA engineering, product, and field teams to translate customer needs into platform feedback, solution patterns, and roadmap inputs.

What you’ll be doing:

  • Evangelize NVIDIA CPU platforms, including Grace, Vera, and future generations, as well as LPU-based systems and LPX-class platforms, with a strong focus on AI software stacks and workload efficiency.

  • Help customers design and optimize AI workloads across CPU, GPU, and LPU, improving latency, throughput, utilization, and overall cost efficiency.

  • Analyze and tune LLM and generative AI pipelines across serving, runtime, memory, I/O, batching, scheduling, and orchestration layers.

  • Build proof-of-concepts, reference architectures, and technical guidance in partnership with Engineering, Product, and Sales teams.

  • Establish trusted technical relationships with customer architects, infrastructure teams, and senior leaders, becoming a strategic advisor for heterogeneous AI system design.

What we need to see:

  • MS or PhD in Computer Science, Engineering, Mathematics, Physics, or a related field, or equivalent experience, plus 5+ years in AI systems, infrastructure, performance engineering, or solution architecture.

  • Strong understanding of modern CPU architecture, Linux systems, and software performance tuning, along with hands-on experience in AI inference for LLM, generative AI, or agentic AI workloads.

  • Experience optimizing heterogeneous systems involving CPU and accelerators, with familiarity in frameworks such as PyTorch, Triton, TensorRT-LLM, vLLM, or ONNX Runtime.

  • Strong programming, problem-solving, and communication skills, with the ability to work effectively with both technical teams and senior customer stakeholders.

Ways to stand out from the crowd:

  • Experience with NVIDIA CPU platforms such as Grace, Grace Hopper, or Arm64 server environments, and familiarity with LPU-based systems or other low-latency inference accelerators.

  • Deep expertise in LLM inference optimization, serving architecture, and workload placement across CPU, GPU, and LPU.

  • Experience building customer-facing proof-of-concepts and measuring AI efficiency through latency, throughput, cost per token, power, or utilization.

  • Familiarity with NVIDIA AI software and platform technologies.

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-looking and talented people in the world working with us. If you are creative, autonomous, and excited about helping customers build highly efficient AI platforms across CPU, GPU, and LPU technologies, we want to hear from you.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. We highly value diversity in our current and future employees and do not discriminate on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

Key Skills
Solutions Architecture, CPU Architecture, AI Infrastructure, Performance Engineering, LLM Inference, Generative AI, PyTorch, Triton, TensorRT-LLM, vLLM, ONNX Runtime, Linux, Software Optimization, Heterogeneous Computing, Arm64
Categories
Technology, Engineering, Software, Data & Analytics, Consulting