
Nebius

Senior ML Engineer (Token Factory) at Nebius

Amsterdam, Netherlands; Berlin, Germany; Israel; London, United Kingdom; Prague, Czech Republic; Remote - Europe · Full-time · Remote · ML · Posted 15 days ago

About the Role

<div class="content-intro"><p><strong>About Nebius:</strong></p> <p>Nebius is leading a new era in cloud infrastructure for the global AI economy. We are building a full-stack AI cloud platform that supports developers and enterprises from data and model training through to production deployment, without the cost and complexity of building large in-house AI/ML infrastructure.</p> <p>Built by engineers, for engineers. From large-scale GPU orchestration to inference optimization, we own the hard problems across compute, storage, networking and applied AI.</p> <p>Listed on Nasdaq (NBIS) and headquartered in Amsterdam, we have a global footprint with R&amp;D hubs across Europe, the UK, North America and Israel. Our team of 1,500+ includes hundreds of engineers with deep expertise across hardware, software and AI R&amp;D.</p></div><p><strong>The role</strong></p> <p>Token Factory is part of Nebius Cloud, one of the world's largest GPU clouds, running tens of thousands of GPUs. We are building a high-performance inference and fine-tuning platform designed to push foundation models to their hardware limits. Our mission is to maximize throughput, minimize latency, and optimize cost-per-token across tens of thousands of GPUs.</p> <p>&nbsp;</p> <p><strong>Some directions we are currently working on, and which you can be a part of:</strong></p> <ul> <li>Inference Optimization: Identifying LLM inference bottlenecks to drive production speedups. 
Squeezing maximum performance out of a wide range of LLM architectures at scale (e.g., GPT-OSS, Kimi K2.5, DeepSeek V3.1/V3.2, GLM-5).</li> <li>Inference Engine Support: Implementing novel speculative decoding architectures, optimizing components of various LLM designs (dense/MoE, autoregressive/parallel), and contributing to open-source inference engines.</li> <li>Low Precision Training &amp; Inference: Designing and productionizing low-precision (FP8, NVFP4/MXFP4) training and inference pipelines with measurable gains in throughput and cost-efficiency.</li> </ul> <p>&nbsp;</p> <p><strong>We expect you to have:</strong></p> <ul> <li>A deep understanding of the theoretical foundations of machine learning and transformer architectures.</li> <li>Experience profiling GPU workloads using Nsight, the PyTorch profiler, or similar tools.</li> <li>An understanding of the GPU memory hierarchy and compute/memory tradeoffs.</li> <li>Familiarity with key ideas in the LLM space, such as MHA, RoPE, KV-cache, Flash Attention, and quantization.</li> <li>An understanding of the performance aspects of large neural network training (sharding strategies, custom kernels, hardware features, etc.).</li> <li>Strong software engineering skills (we mostly use Python).</li> <li>Deep experience with modern deep learning frameworks.</li> <li>Proficiency in contemporary software engineering practices, including CI/CD, version control and unit testing.</li> <li>Strong communication and leadership abilities.</li> </ul> <p>&nbsp;</p> <p><strong>Nice to have:</strong></p> <ul> <li>Experience working with open-source inference engines (vLLM, SGLang, TensorRT-LLM), including contributions.</li> <li>Experience with kernel languages or DSLs such as Triton, CuTe, CUTLASS, or CUDA.</li> <li>A track record of building and delivering products (not necessarily ML-related) in a dynamic, startup-like environment.</li> <li>Strong engineering skills, including experience in developing large distributed systems or high-load web 
services.</li> <li>Open-source projects that showcase your engineering skills.</li> <li>Excellent command of English, with strong writing and communication skills.</li> </ul> <p>&nbsp;</p><div class="content-conclusion"><p><strong>Benefits &amp; Perks:</strong></p> <ul> <li>Competitive compensation</li> <li>Career growth and learning opportunities</li> <li>Flexibility and work-life balance</li> <li>Collaborative and innovative culture</li> <li>Opportunity to work on impactful AI projects</li> <li>International environment and talented teams</li> </ul> <p><strong>What's it like to work at Nebius:</strong></p> <p>Fast moving&nbsp;- Bold thinking&nbsp;- Constant growth&nbsp;- Meaningful impact&nbsp;- Trust and real ownership&nbsp;- Opportunity to shape the future of AI&nbsp;</p> <p><strong>Equal Opportunity Statement:</strong></p> <p>Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.</p> <p>Applicants must be authorized to work in the country in which they apply and will be required to provide proof of employment eligibility as a condition of hire.&nbsp;</p> <p>If you need accommodations during the application process, please let us know.</p></div>