Crusoe is building the world's favorite AI-first cloud infrastructure company. We're pioneering vertically integrated, purpose-built AI infrastructure solutions trusted by Fortune 500 companies to power their most advanced AI applications. Crusoe is redefining AI cloud infrastructure, with a mission to align the future of computing with the future of the climate. Our AI platform is recognized as the "gold standard" for reliability and performance, and our data centers are optimized for AI workloads and powered by clean, renewable energy.
Be part of the AI revolution with sustainable technology at Crusoe. Here, you'll drive meaningful innovation, make a tangible impact, and join a team that’s setting the pace for responsible, transformative cloud infrastructure.
About the Role
As a Senior/Staff Software Engineer on the Managed AI team at Crusoe, you'll play a pivotal role in shaping the architecture and scalability of our next-generation AI inference platform. You will lead the design and implementation of core systems for our AI services, including resilient, fault-tolerant queues, model catalogs, and scheduling mechanisms optimized for cost and performance. This role gives you the opportunity to build and scale infrastructure capable of handling millions of API requests per second across thousands of customers.
From day one, you'll own critical subsystems for managed AI inference, helping to serve large language models (LLMs) to a global audience. As part of a dynamic, fast-growing team, you’ll collaborate cross-functionally, influence the long-term vision of the platform, and contribute to cutting-edge AI technologies. This is a unique opportunity to build a high-performance AI product that will be central to Crusoe's business growth.
A Day in the Life
As a Senior/Staff Software Engineer on the Managed AI team, you'll play a crucial role in building the infrastructure to serve artificial neural networks at scale, with a near-term focus on large language models (LLMs). You'll own the design and implementation of key subsystems for resiliency and quality of service. You will build model catalogs, billing systems, and dynamic pricing models, and have the opportunity to go deep into the model deployment stack for cost-optimized scheduling. Each day, you'll collaborate with a small but growing team of engineers to build scalable cloud-based solutions that can handle millions of requests per second.
You’ll work closely with cross-functional teams, including product management and business strategy, to develop a customer-facing API that serves real-world AI models. Every day will present an opportunity to influence the long-term vision and architectural decisions, from the first lines of code to full-scale implementation. You’ll also be prototyping rapidly, optimizing performance on GPUs, and ensuring high availability as part of the MVP development. Whether it’s contributing to open-source AI frameworks or diving into low-level performance optimizations, your contributions will directly impact both the company’s growth and the product’s success.
You Will Thrive In This Role If:
- You have a strong background in distributed systems design and implementation, with proven experience in early-stage projects and tight deadlines.
- You are passionate about building scalable AI infrastructure and have experience with cloud-based services that can handle millions of requests.
- You enjoy problem-solving around performance optimizations, particularly when it comes to AI inference on GPU-based systems.
- You have a proactive and collaborative approach, with the ability to work autonomously while engaging with a rapidly growing team.
- You have strong communication skills, both written and verbal, and can translate complex technical challenges into understandable terms for cross-functional teams.
- You're excited about working in a fast-paced environment, contributing to a new product category, and having a tangible influence on the long-term vision of the AI platform.
- You are passionate about open-source contributions and AI inference frameworks like vLLM, with a desire to push the boundaries of performance and scalability.
- You are keen on customer-facing product development, with a desire to build user-friendly APIs that integrate real-world feedback for continuous improvement.
Qualifications
Must-Have:
- Advanced degree in Computer Science, Engineering, or a related field.
- Demonstrable experience in distributed systems design and implementation.
- Proven track record of delivering early-stage projects under tight deadlines.
- Expertise with cloud-based services such as elastic compute, object storage, virtual private networks, and managed databases.
- Experience in Generative AI (Large Language Models, Multimodal).
- Experience with container orchestration (e.g., Kubernetes) and microservices architectures.
- Experience using REST APIs and common communication protocols such as gRPC.
- Demonstrated experience with the software development lifecycle and familiarity with CI/CD tools.
Nice-to-Have:
- Proficiency in Golang or Python for large-scale, production-level services.
- Familiarity with AI infrastructure, including training, inference, and ETL pipelines.
- Contributions to open-source AI projects such as vLLM or similar frameworks.
- Experience with performance optimization on GPU systems and inference frameworks.
Growth Opportunities
- Shape the foundation of a cutting-edge, customer-facing AI inference platform.
- Become a technical leader in performance optimization and AI infrastructure.
- Collaborate with partners like Intel and NVIDIA on pushing the limits of AI performance.
- Contribute to open-source AI frameworks and gain visibility in the AI community.
- Take on leadership roles as the team scales, with opportunities to mentor junior engineers and influence the product roadmap.
Benefits
- Hybrid work schedule
- Industry competitive pay
- Restricted Stock Units in a fast growing, well-funded technology company
- Health insurance package options that include HDHP and PPO, vision, and dental for you and your dependents
- Employer contributions to HSA accounts
- Paid Parental Leave
- Paid life insurance, short-term and long-term disability
- Teladoc
- 401(k) with a 100% match up to 4% of salary
- Generous paid time off and holiday schedule
- Cell phone reimbursement
- Tuition reimbursement
- Subscription to the Calm app
- MetLife Legal
- Company paid commuter benefit; $50 per pay period
Compensation Range
Compensation will be paid in the range of $183,000 - $250,000. Restricted Stock Units are included in all offers. Compensation will be determined by the applicant's knowledge, education, and abilities, as well as internal equity and alignment with market data.
Crusoe is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, disability, genetic information, pregnancy, citizenship, marital status, sex/gender, sexual preference/orientation, gender identity, age, veteran status, national origin, or any other status protected by law or regulation.
What We Do
Crusoe is on a mission to eliminate routine flaring of natural gas and reduce the cost of cloud computing. We are passionate about our goals to help the oil industry operate more efficiently, achieve better relationships with communities and regulators, and improve environmental performance. Crusoe repurposes otherwise wasted energy to fuel the growing demand for computational power in the expanding digital economy.
Why Work With Us
Crusoe has five core values with each value grounded in a set of actionable practices. The combination of philosophical values and actionable practices creates a decision-making framework for each employee to achieve success at Crusoe.
Hybrid Workspace
Our hybrid policy allows employees to work from home two days a week, and to work in-person at our Denver or Arvada location three days a week.