🚀 ML Research Scientist


SEMRON GmbH

💰 Earn $60,000 – $80,000 / year
  • 📍 Location: Dresden
  • 📅 Posted: Oct 21, 2025

About the Role

As an ML Research Scientist at SEMRON, you will design the algorithms and quantization schemes that unlock efficient, high-accuracy inference on our analog in-memory compute platform. Your work will bridge cutting-edge quantization research, mathematical modeling, and hardware-aware algorithm design, ensuring that deep neural networks execute with maximal accuracy and throughput on our custom silicon.

What you will do:

  • Research and develop novel analog-aware quantization methods (PTQ and QAT) tailored to in-memory compute constraints
  • Design mathematically principled matrix-vector multiplication algorithms that exploit sparsity, noise resilience, and non-idealities to improve hardware efficiency
  • Collaborate with analog hardware engineers to define algorithmic requirements and guide co-development of compute primitives
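To make the first two bullets concrete, here is a minimal, hypothetical sketch of the kind of problem this role tackles: uniformly quantizing a weight matrix and simulating a matrix-vector multiplication with additive Gaussian read noise as a crude stand-in for analog non-idealities. The function names, bit width, and noise model are illustrative assumptions, not SEMRON's actual methods.

```python
import numpy as np

def quantize_symmetric(w, bits=4):
    # Symmetric uniform quantization: map real weights onto signed
    # integer levels in [-(2^(bits-1)-1), 2^(bits-1)-1].
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q, scale

def noisy_mvm(w, x, bits=4, noise_std=0.01, seed=None):
    # Quantize the weights, perform the MVM, then inject Gaussian
    # noise (a toy model of analog read noise / device variation).
    rng = np.random.default_rng(seed)
    q, scale = quantize_symmetric(w, bits)
    y = (q * scale) @ x
    return y + rng.normal(0.0, noise_std * np.abs(y).max(), size=y.shape)

# Compare the noisy quantized MVM against the ideal float result.
rng = np.random.default_rng(0)
w = rng.standard_normal((8, 16))
x = rng.standard_normal(16)
error = np.abs(noisy_mvm(w, x, seed=1) - w @ x).max()
print(f"max deviation from float MVM: {error:.4f}")
```

Analog-aware quantization research in this role would go far beyond such a toy: shaping quantization grids, training procedures (QAT), and MVM decompositions so that accuracy survives the hardware's real noise and nonlinearity profile.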

What you should bring:

  • PhD or equivalent research experience in machine learning, applied mathematics, or a related field
  • Strong understanding of quantization, model optimization, and numerical methods for DNNs
  • Proficiency in Python and PyTorch, with the ability to rapidly prototype and evaluate research ideas
  • A research mindset: curiosity, rigor, and the ability to explore and discard ideas efficiently

Helpful but not required:

  • Contributions to quantization libraries or novel compression methods
  • Publications in top-tier ML venues (NeurIPS, ICLR, ICML, etc.)
  • Familiarity with analog computation challenges (noise, nonlinearity, limited precision, etc.) and the ability to abstract them into robust algorithms
  • Experience collaborating with hardware teams or formulating algorithm-hardware co-design strategies

Why us?

We’re building at the intersection of math, hardware, and machine learning, pushing the boundaries of what's possible in compute. If you’ve implemented your own MVM kernels just to see what happens, trained quantized models for fun, or love thinking deeply about efficiency, sparsity, and how to make models run faster and better, you’ll feel right at home. As a small, technical team, early work defines the future of the stack, and we treat it that way. You’ll own critical pieces of what we build, with equity to match. No hierarchy, no bureaucracy, just ideas, experiments, and real impact. You’ll grow as fast as you can.

👉 Apply Now
