This job listing has expired and the position may no longer be open for hire.

Principal Infrastructure Performance and Development Engineer at Nvidia Corporation

Posted in General Business 30+ days ago.

Type: Full-Time
Location: Santa Clara, California

Job Description:

Joining NVIDIA's AI Efficiency Team means contributing to the infrastructure that powers our leading-edge AI research. This team focuses on optimizing the efficiency and resiliency of ML workloads, as well as developing scalable AI infrastructure tools and services. Our objective is to deliver a stable, scalable environment for NVIDIA's AI researchers, providing them with the resources and scale needed to foster innovation. We're transforming the way Deep Learning applications run on tens of thousands of GPUs. Join our team of experts and help us build a supercharged AI platform that maximizes efficiency, resilience, and Model FLOPs Utilization (MFU). In this position, you will collaborate with a diverse team that cuts across many areas of the Deep Learning HW/SW stack to build a highly scalable, fault-tolerant, and optimized AI platform.

What you will be doing:


  • Build tools and frameworks that provide real-time application performance metrics that can be correlated with system metrics.


  • Develop automation frameworks that enable applications to anticipate and recover from system and infrastructure failures, ensuring fault tolerance.


  • Collaborate with software teams to pinpoint performance bottlenecks. Design, prototype, and integrate solutions that deliver demonstrable performance gains in production environments.


  • Adapt and enhance communication libraries to seamlessly support innovative network topologies and system architectures.


  • Design or adapt optimized storage solutions to boost Deep Learning efficiency, resilience, and developer productivity.



What We Need to See:

  • BS/MS/PhD (or equivalent experience) in Computer Science, Electrical Engineering or a related field.


  • Proven experience in at least one of the following areas:


    • 10+ years of experience analyzing and improving the performance of training applications using PyTorch or a similar framework

    • 10+ years of experience building distributed software applications

    • 10+ years of experience building storage solutions for Deep Learning applications

    • 10+ years of experience building automated, fault-tolerant distributed applications

    • 5+ years of experience building tools for bottleneck analysis and automation of fault tolerance in distributed environments


  • Strong background in parallel programming and distributed systems


  • Experience analyzing and optimizing large-scale distributed applications.


  • Excellent verbal and written communication skills



Ways To Stand Out From The Crowd:

  • Deep understanding of HPC and distributed system architecture, with an emphasis on RDMA


  • Hands-on experience in more than one of the above areas, especially performance analysis and profiling of Deep Learning workloads.


  • Comfortable navigating and working with the PyTorch codebase.


  • Proven understanding of CUDA and GPU architecture



The base salary range is 272,000 USD - 419,750 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.




