Join a European deep-tech leader in quantum and AI, working on cutting-edge solutions to make AI faster, greener, and more accessible. Design and develop new techniques to compress Large Language Models using quantum-inspired technologies.
Job Description
We are a European deep-tech leader in quantum and AI, backed by major global strategic investors and strong EU support. Our groundbreaking technology is already transforming how AI is deployed worldwide: compressing large language models by up to 95% without losing accuracy and cutting inference costs by 50-80%.
Joining us means working on cutting-edge solutions that make AI faster, greener, and more accessible, and being part of a company often described as a "quantum-AI unicorn in the making."
We offer
- Competitive annual salary.
- Two unique bonuses: a signing bonus when you join and a retention bonus at contract completion.
- Relocation package (if applicable).
- Fixed-term contract ending in June 2026.
- Hybrid role and flexible working hours.
- Be part of a fast-scaling Series B company at the forefront of deep tech.
- Equal pay guaranteed.
- International exposure in a multicultural, cutting-edge environment.
As a Machine Learning Engineer, you will
- Design and develop new techniques to compress Large Language Models based on quantum-inspired technologies to solve challenging use cases in various domains.
- Conduct rigorous evaluations and benchmarks of model performance, identify areas for improvement, and fine-tune and optimise LLMs for enhanced accuracy, robustness, and efficiency.
- Build LLM-based applications such as RAG pipelines and AI agents.
- Use your expertise to assess the strengths and weaknesses of models, propose enhancements, and develop novel solutions to improve performance and efficiency.
- Act as a domain expert in the field of LLMs, understanding domain-specific problems and identifying opportunities for quantum AI-driven innovation.
- Design, train, and deliver custom deep learning models for our clients.
- Work in diverse areas beyond LLMs, e.g., computer vision.
- Maintain comprehensive documentation of LLM development processes, experiments, and results.
- Share your knowledge and expertise to foster a culture of continuous learning, guiding junior team members in their technical growth and helping them develop their skills in LLM development.
- Participate in code reviews and provide constructive feedback to team members.
- Stay up to date with the latest advancements and emerging trends in LLMs and recommend new tools and technologies as appropriate.
Required Qualifications
- Bachelor's, Master's or Ph.D. in Artificial Intelligence, Computer Science, Data Science, or related fields.
- 2+ years of hands-on experience designing, training, or fine-tuning deep learning models, preferably transformer or computer vision models.
- 2+ years of hands-on experience using transformer models, with excellent command of libraries such as HuggingFace Transformers, Accelerate, Datasets, etc.
- Solid mathematical foundations and theoretical understanding of deep learning algorithms and neural networks, both training and inference.
- Excellent problem-solving, debugging, performance analysis, test design, and documentation skills.
- Strong understanding of the fundamentals of GPU architectures and LLM hardware/software infrastructures.
- Excellent programming skills in Python and experience with relevant libraries (PyTorch, HuggingFace, etc.).
- Experience with cloud platforms (ideally AWS), containerization technologies (Docker), and deploying AI solutions in a cloud environment.
- Excellent written and verbal communication skills, with the ability to work collaboratively in a fast-paced team environment and communicate complex ideas effectively.
- Previous research publications in deep learning or any tech field are a plus.
- Fluent in English.
Preferred Qualifications
- Experience running large-scale workloads in high-performance computing (HPC) clusters.
- Experience in handling large datasets and ensuring data quality.
- Experience with inference and deployment environments (TensorRT, vLLM, etc.).
- Experience in accuracy evaluation of LLMs (e.g., the Open LLM Leaderboard).
- Experience building and evaluating RAG systems.
- Experience in building non-LLM deep learning applications, e.g., computer vision, audio or signal processing.
- Familiarity with AI ethics and responsible AI practices.
- Experience in DevOps/MLOps practices in deep learning product development.