Distributed LLM Inference Engineer

Posted 4 Days Ago
San Francisco, CA
170K-237K Annually
Mid level
Artificial Intelligence • Cloud • Machine Learning
Accelerate development and production of any AI app, on any cloud, at any scale — with Ray.
The Role
As a Distributed LLM Inference Engineer, you will build and optimize systems for high-performance ML inference at scale, integrating Ray Data and tuning the stack for cost-effective AI infrastructure, while collaborating with product teams and the open-source community.
Summary Generated by Built In

About Anyscale:


At Anyscale, we're on a mission to democratize distributed computing and make it accessible to software developers of all skill levels. We're commercializing Ray, a popular open-source project that's creating an ecosystem of libraries for scalable machine learning. Companies like OpenAI, Uber, Spotify, Instacart, Cruise, and many more have Ray in their tech stacks to accelerate getting AI applications out into the real world.


With Anyscale, we’re building the best place to run Ray, so that any developer or data scientist can scale an ML application from their laptop to the cluster without needing to be a distributed systems expert.


Proud to be backed by Andreessen Horowitz, NEA, and Addition with $250+ million raised to date.


About the role


As a Distributed LLM Inference Engineer, you will help build systems and optimizations that push the boundaries of performance for inference at large scale. This role is critical to Anyscale: it allows us to offer market-leading performance and pricing for AI infrastructure.

As part of this role, you will

  • Iterate quickly with product teams to ship end-to-end solutions for batch and online inference at high scale, which will be used by Anyscale customers
  • Work across the stack, integrating Ray Data with the LLM engine and adding optimizations to deliver low-cost solutions for large-scale ML inference
  • Integrate with open-source software like vLLM, work closely with the community to adopt these techniques in Anyscale solutions, and contribute improvements back to open source
  • Follow the latest state of the art in the open-source and research communities, implementing and extending best practices
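The Ray Data + LLM engine integration described above follows a common batch-inference pattern: an expensive engine (e.g. vLLM) is instantiated once per worker and then applied to batches of records, rather than per record. The sketch below illustrates that pattern in plain Python with a stand-in engine; the names `FakeLLMEngine` and `run_batch_inference` are illustrative only, and the real integration would use Ray Data's batched mapping over actors instead of a thread pool.

```python
# Minimal sketch of the batch-inference pattern, using only the standard
# library. In the real stack, Ray Data would distribute batches to actors,
# each holding one LLM engine instance (e.g. vLLM).
from concurrent.futures import ThreadPoolExecutor


class FakeLLMEngine:
    """Stand-in for an LLM engine; the 'model load' happens once per worker."""

    def __init__(self):
        self.loaded = True  # expensive initialization, amortized over batches

    def generate(self, prompts):
        # Batched generation amortizes per-call overhead across the batch.
        return [f"echo: {p}" for p in prompts]


def run_batch_inference(records, batch_size=4, workers=2):
    """Split records into batches and run them through the engine in parallel."""
    batches = [records[i:i + batch_size] for i in range(0, len(records), batch_size)]
    engine = FakeLLMEngine()  # with Ray Data: one engine per worker actor
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(engine.generate, batches)  # preserves batch order
    return [output for batch in results for output in batch]
```

The key design point the role's bullets hint at is keeping engine construction out of the per-record path and sizing batches so the engine's throughput, not scheduling overhead, dominates cost.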

We'd love to hear from you if you have

  • Familiarity with running ML inference at large scale with high throughput
  • Familiarity with deep learning and deep learning frameworks (e.g. PyTorch)
  • Solid understanding of distributed systems and the challenges of ML inference

Bonus points!

  • ML Systems knowledge
  • Experience using Ray Data
  • Experience working closely with the community on LLM engines like vLLM or TensorRT-LLM
  • Contributions to deep learning frameworks (PyTorch, TensorFlow)
  • Contributions to deep learning compilers (Triton, TVM, MLIR)
  • Prior experience working on GPUs / CUDA 

Compensation

  • At Anyscale, we take a market-based approach to compensation. We are data-driven, transparent, and consistent. The target salary for this role is $170,112 to $247,000. As the market data changes over time, the target salary for this role may be adjusted.

  • This role is also eligible to participate in Anyscale's Equity and Benefits offerings, including the following:

  • Stock Options
  • Healthcare plans, with premiums covered by Anyscale at 99%
  • 401k Retirement Plan
  • Wellness stipend
  • Education stipend
  • Paid Parental Leave
  • Flexible Time Off
  • Commute reimbursement
  • 100% of in-office meals covered

Anyscale Inc. is an Equal Opportunity Employer. Candidates are evaluated without regard to age, race, color, religion, sex, disability, national origin, sexual orientation, veteran status, or any other characteristic protected by federal or state law. 


Anyscale Inc. is an E-Verify company, and you may review the Notice of E-Verify Participation and the Right to Work posters in English and Spanish.

Top Skills

CUDA
MLIR
PyTorch
Ray
TensorFlow
Triton
TVM
The Company
San Francisco, CA
Hybrid Workplace
Year Founded: 2019

What We Do

Distributed computing made simple. Anyscale enables developers of all skill levels to easily build AI applications that run at any scale, from a laptop to a data center.
