
Research Scientist – Controlled 3D Generation at Stability AI
Remote · Full-time · Research · Posted about 2 months ago
About the Role
<p>We’re seeking a Research Scientist passionate about <strong>3D generation, flow matching, and diffusion models</strong>. You’ll help advance the frontier of controllable 3D content creation—building models that generate consistent, editable, and physically grounded 3D assets and scenes.</p>
<p><strong>What You’ll Do</strong></p>
<ul>
<li>Conduct cutting-edge research on <strong>flow-matching, diffusion, and score-based methods</strong> for 3D generation and reconstruction.</li>
<li>Design and implement scalable training pipelines for controllable 3D generation (meshes, Gaussians, NeRFs, voxels, implicit fields).</li>
<li>Develop techniques for <strong>conditioning and control</strong> (text, sketch, pose, camera, physics) and multi-view consistency.</li>
<li>Analyse model behaviour through ablations, visualisations, and quantitative metrics.</li>
<li>Collaborate with cross-disciplinary research, graphics, and infrastructure teams to translate research into production-ready systems.</li>
<li>Publish results at top-tier venues and mentor research interns.</li>
</ul>
<p><strong>What You Bring</strong></p>
<ul>
<li>PhD (or equivalent experience) in Machine Learning, Computer Vision, or Computer Graphics.</li>
<li>Published work on <strong>diffusion, flow-matching, or score-based generative models</strong> (2D or 3D).</li>
<li>Strong engineering and problem-solving abilities: experience with <strong>PyTorch, JAX, or CUDA-level optimisation</strong>.</li>
<li>Understanding of <strong>3D representations</strong> (meshes, Gaussians, signed-distance fields, volumetric grids, implicit networks).</li>
<li>Solid grasp of <strong>geometry processing, multi-view consistency, and differentiable rendering</strong>.</li>
<li>Ability to scale experiments efficiently and communicate complex results clearly.</li>
</ul>
<p><strong>Bonus / Preferred</strong></p>
<ul>
<li>Experience generating <strong>coherent 3D scenes</strong> with multiple interacting objects, lighting, and spatial layout.</li>
<li>Familiarity with <strong>scene-level control</strong> (object placement, camera path, simulation, or text-to-scene composition).</li>
<li>Knowledge of <strong>video-to-3D</strong>, <strong>image-to-scene</strong>, or <strong>4D temporal generation</strong>.</li>
<li>Background in <strong>physically-based rendering</strong>, <strong>simulation</strong>, or <strong>world-model architectures</strong>.</li>
<li>Track record of impactful publications or open-source releases.</li>
</ul>
<p><strong>Equal Employment Opportunity:</strong></p>
<p>We are an equal opportunity employer and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, disability, or other legally protected statuses.</p>