At Together.ai, we are building state-of-the-art infrastructure to enable efficient and scalable inference for large language models (LLMs). Our mission is to optimize inference frameworks, algorithms, and infrastructure, pushing the boundaries of performance, scalability, and cost-efficiency.
We are seeking an Inference Frameworks and Optimization Engineer to design, develop, and optimize distributed inference engines that support multimodal and language models at scale. This role will focus on low-latency, high-throughput inference, GPU/accelerator optimizations, and software-hardware co-design, ensuring efficient large-scale deployment of LLMs and vision models.
This role offers a unique opportunity to shape the future of LLM inference infrastructure, ensuring scalable, high-performance AI deployment across a diverse range of applications. If you're passionate about pushing the boundaries of AI inference, we’d love to hear from you!
Must-Have:
Nice-to-Have:
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $160,000–$230,000, plus equity and benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
Ready to apply?
Apply to Together AI
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
Customer experience at Nebius GPU AI Cloud involves tackling customers’ challenges and directly impacting their success by solving real-world AI and ML problems at massive GPU cloud scale. You’ll not only resolve issues, but play a key role in shaping clients’ business success by optimizing their AI solutions. Working with advanced GPUs such as H200, B200 and GB200, as well as modern ML frameworks, you’ll influence the development of the Nebius AI Cloud and gain experience at the intersection of infrastructure and AI. With minimal bureaucracy, you’ll have the freedom to innovate, take ownership and drive change. Opportunities for growth are abundant in this vibrant and supportive professional community.
We are seeking a highly skilled, customer-focused leader to join our team as a Cloud Solutions Architect Lead. As a team leader, you will play a pivotal role in customer success across the EU region, both managing the Cloud Solutions Architects team (technical and process excellence, talent development) and building processes alongside your stakeholders. A player-coach mentality and a proven track record of managing high-expertise teams are a must.
You’re welcome to work from our office in Amsterdam or remotely from any EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
What we offer:
We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!
Equal Opportunity Statement:
Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.
Applicants must be authorized to work in the country in which they apply, and will be required to provide proof of employment eligibility as a condition of hire.
Ready to apply?
Apply to Nebius
We are looking for a Customer Engineer to support key and strategic Nebius GPU Cloud services customers. In this role, you will be a trusted technical advisor, helping clients design, deploy, and scale AI solutions while managing large-scale GPU workloads involving hundreds to thousands of GPUs. You will also collaborate with sales and product teams to drive growth and enhance customer satisfaction.
You’re welcome to work remotely from Europe.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
What we offer:
Ready to apply?
Apply to Nebius
Nebius seeks a Key Customers Solutions Architect to support key and strategic Nebius GPU Cloud services customers. In this role, you will be a trusted technical advisor, helping clients design, deploy, and scale AI solutions while managing large-scale GPU workloads involving hundreds to thousands of GPUs. You will also collaborate with sales and product teams to drive growth and enhance customer satisfaction.
You’re welcome to work remotely from Europe.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
What we offer:
Ready to apply?
Apply to Nebius
This is Adyen
Adyen provides payments, data, and financial products in a single solution for customers like Meta, Uber, H&M, and Microsoft - making us the financial technology platform of choice. At Adyen, everything we do is engineered for ambition.
For our teams, we create an environment with opportunities for our people to succeed, backed by the culture and support to ensure they are enabled to truly own their careers. We are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, we deliver innovative and ethical solutions that help businesses achieve their ambitions faster.
Machine Learning Scientist
Adyen is looking for a Machine Learning Scientist to join our team in Amsterdam: someone at the intersection of algorithms, mathematics, and engineering who can solve problems by designing and implementing production-ready machine learning solutions. You will be responsible for building, developing, and deploying the algorithms that power data products at Adyen.
We are currently hiring for the following teams:
Insights - Diagnostics: The Insights team is at the core of Adyen's platform, providing the world’s largest merchants with the data and analytics they need to optimize their payment performance. Within this, our Proactive Diagnostics initiative acts as a proactive guard for merchant revenue, closing the loop between detecting an anomaly and providing a clear path to rectification. We operate at the intersection of Big Data and actionable intelligence. By leveraging Adyen’s global payment flow, we apply advanced statistical models and Causal Inference not only to detect performance drops but to explain the "why" behind them. We are looking for a Machine Learning Scientist to help us architect the next generation of our diagnostic engine.
Regulatory Reporting Technology:
Adyen's Regulatory Reporting Tech team is seeking a Machine Learning Scientist to join us in Amsterdam. You will help in further automating and scaling our global regulatory reporting framework to keep Adyen compliant across all markets. Together with the team, you will implement key technical solutions to streamline our regulatory reporting operations. If you have a strong machine learning background and love solving complex problems, we want to hear from you.
In this role, you will:
Who You Are:
Our Diversity, Equity and Inclusion commitments
Our unique approach is a product of our diverse perspectives. This diversity of backgrounds and cultures is essential in helping us maintain our momentum. Our business and technical challenges are unique, and we need as many different voices as possible to join us in solving them - voices like yours. No matter who you are or where you’re from, we welcome you to be your true self at Adyen.
Studies show that women and members of underrepresented communities apply for jobs only if they meet 100% of the qualifications. Does this sound like you? If so, Adyen encourages you to reconsider and apply. We look forward to your application!
What’s next?
Ensuring a smooth and enjoyable candidate experience is critical for us. We aim to get back to you regarding your application within 5 business days. Our interview process tends to take about 4 weeks to complete, but may fluctuate depending on the role. Learn more about our hiring process here. Don’t be afraid to let us know if you need more flexibility.
This role is based out of our Amsterdam office. We are an office-first company and value in-person collaboration; we do not offer remote-only roles.
Ready to apply?
Apply to Adyen
This is Adyen
At Adyen, we’re engineered for ambition. We empower our teams with the culture and support they need to own their careers. The people of Adyen are motivated problem-solvers who tackle unique technical challenges at scale, delivering innovative and ethical solutions to help the world’s best businesses achieve their ambitions faster, and we’re looking for a motivated Senior Machine Learning Engineer to join our team in Amsterdam.
Senior Machine Learning Engineer
The Customer Risk team is at the front line of this platform, building the next-generation systems required to assess and mitigate risk in real time. They are responsible for keeping our platform safe, while maintaining a seamless experience for our legitimate global merchants.
They operate at the critical intersection of high-stakes security and massive scale. By leveraging Adyen’s global payment flow, they are building a greenfield risk engine from scratch that moves beyond traditional detection to sophisticated, real-time entity assessment. We are looking for our first Senior Machine Learning Engineer to lead this effort and help architect the future of risk at Adyen.
In this role, you will:
Who You Are:
This role is based in Amsterdam. Our culture is built on the foundation of in-person collaboration, where our teams work side-by-side to solve unique challenges and accelerate growth.
Ready to apply?
Apply to Adyen
We seek an experienced Specialist Solutions Architect to support AI-focused customers leveraging Nebius services. In this role, you will be a trusted advisor, collaborating with clients to design scalable AI solutions, resolve technical challenges and manage large-scale AI deployments involving hundreds to thousands of GPUs.
You’re welcome to work on-site in Amsterdam or remotely from any other EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Preferred tooling:
What we offer:
Ready to apply?
Apply to Nebius
Compensation
We offer competitive compensation packages based on experience.
What we offer:
Ready to apply?
Apply to Nebius
The role
At Nebius, we’re building a next-generation AI compute platform for large-scale ML training and inference — from a few nodes to thousands of GPUs.
We’re looking for a Technical Product Manager to own product direction for Soperator — our Slurm-on-Kubernetes control plane for GPU clusters.
In this role, you will shape how ML engineers and research teams run, scale, and optimize distributed workloads in production.
If you care about systems that combine performance, reliability, and developer experience at the frontier of AI infrastructure, this role is for you.
Your responsibilities will include:
• Own the full user journey across Soperator clusters: Slurm workflows, dashboards, alerts/notifications, node lifecycle, and training/inference capacity management.
• Define product direction end-to-end: problem discovery → solution design → delivery → adoption.
• Lead deep customer discovery through interviews, usage analytics, and workload analysis to uncover high-impact opportunities.
• Drive execution across platform teams: compute, networking, storage, observability, IAM, and more.
• Translate frontier ML and infrastructure ideas into practical product capabilities for real-world GPU clusters.
• Define success metrics, prioritize roadmap decisions with data, and ensure measurable customer/business impact.
• Lead the open-source strategy and execution for Soperator: shape public roadmap themes, prioritize OSS-facing capabilities, and ensure strong adoption in the community.
We expect you to have:
• 3–5+ years in Product Management, ML infrastructure/MLOps, distributed systems, or cloud platform engineering.
• Strong technical depth in distributed systems, cloud infrastructure, or ML platforms.
• Hands-on familiarity with large-scale ML training and orchestration tools (e.g., Slurm, Kubernetes, Ray).
• Track record of shipping technically complex products with multiple engineering teams.
• Strong communication and stakeholder management across engineering, research, and customers.
• Experience with product analytics, data-informed prioritization, and experimentation.
• High ownership, high learning velocity, and comfort operating in fast-moving AI infrastructure environments.
It will be an added bonus if you have:
• Experience with GPU platforms and HPC primitives: InfiniBand/RDMA, topology-aware scheduling, high-throughput storage.
• Practical understanding of modern ML training stacks: PyTorch, DeepSpeed, FSDP/ZeRO, NCCL.
• Familiarity with efficiency and reliability metrics: Goodput, MFU, failure modes, preemption handling, health checks.
• Exposure to large-scale LLM training/inference systems.
• Experience in observability, performance tuning, or SRE/reliability engineering.
• Customer-facing technical experience (solutioning, support, architecture advisory).
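One of the metrics listed above, MFU (Model FLOPs Utilization), is easy to sketch as a back-of-the-envelope calculation. The snippet below is a minimal illustration, assuming the common 6-FLOPs-per-parameter-per-token approximation for dense-transformer training; the function name and the sample numbers are hypothetical, not Nebius or Soperator values.

```python
def training_mfu(n_params: float, tokens_per_sec: float, peak_flops_per_sec: float) -> float:
    """Estimate Model FLOPs Utilization (MFU) for dense-transformer training.

    Uses the rough approximation that one training step costs about
    6 FLOPs per parameter per token (forward plus backward pass).
    """
    achieved_flops_per_sec = 6.0 * n_params * tokens_per_sec
    return achieved_flops_per_sec / peak_flops_per_sec

# Illustrative numbers (hypothetical): a 1B-parameter model processing
# 100k tokens/s on hardware with a 1 PFLOP/s aggregate peak.
mfu = training_mfu(n_params=1e9, tokens_per_sec=1e5, peak_flops_per_sec=1e15)
print(f"MFU: {mfu:.0%}")  # 60%
```

For inference, or for sparse/MoE architectures, the constant differs, which is why real benchmarking stacks measure achieved FLOPs directly rather than relying on this approximation.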
About Nebius
Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.
Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (Top 500 list, November 2023).
Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.
Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.
What we offer:
Ready to apply?
Apply to Nebius
We are seeking a highly skilled Systems Engineer (Cloudmeter) to join our team to support benchmarking of GPU platforms for machine learning and AI workloads. You will play a critical role in evaluating the performance of GPU-based hardware for various deep learning and AI frameworks, enabling data-driven decisions for platform optimization and next-generation hardware development.
In this position, your responsibility will be to:
We expect you to have:
Ways to stand out from the crowd:
What we offer:
Ready to apply?
Apply to Nebius
We’re looking for a Senior HPC Cluster Engineer to join our team and play a key role in the development of our cutting-edge hyperscaler platform. The GPU & InfiniBand team is responsible for enhancing and optimizing the core components of our Cloud platform, with a specific focus on GPU computing, InfiniBand networks, and the KVM/QEMU stack. You’ll work closely with hardware virtualization and device emulation technologies, ensuring high performance and security in multi-GPU, HPC environments. The role involves analyzing, troubleshooting, and improving infrastructure to support new hardware, fine-tuning system performance, and automating fault detection and resolution in a complex system.
In this position, you will be responsible for:
We expect you to have:
It would be a plus if you have:
We conduct coding interviews as part of the process.
What we offer:
Ready to apply?
Apply to Nebius
The role
Token Factory is a part of Nebius Cloud, one of the world's largest GPU clouds, running tens of thousands of GPUs. We are building a high-performance inference and fine-tuning platform designed to push foundation models to their hardware limits. Our mission is to maximize throughput, minimize latency, and optimize cost-per-token across tens of thousands of GPUs.
Some directions we are currently working on, and which you can be a part of:
We expect you to have:
Nice to have:
What we offer:
Ready to apply?
Apply to Nebius
Software engineers and AI agents alike suffer from the same problem: finding that one person or place that will answer their tough, specific question. Many solutions promise to solve this with similarity search in vector databases. Unfortunately, finding the answer is often a puzzle with pieces to be collected across a myriad of contradictory sources and cannot be solved without surgical search and careful reasoning.
Spectrum collects data from an organization's code, docs, and issues, and organizes knowledge in a unified ontology that AI agents can efficiently search through and reason over. We aim to revolutionize the semantic layer space for software-building organizations and move beyond specs that fall out of sync with code, introducing a living spec – one that's extracted from the whole system and used to keep it aligned. Spectrum is meant to be the single source of truth for all product and architectural knowledge.
A resident of JetBrains' startup incubator, Spectrum enjoys startup speed and autonomy, and is backed by 25 years of developer tooling expertise. We are looking for a Senior ML Researcher to develop the core methods that make Spectrum possible – novel approaches to temporal ontology extraction, contradiction detection, and semantic alignment across heterogeneous software artifacts. You will help define and execute the research agenda, while also collaborating with JetBrains Research and external academic advisors.
*Some benefits may vary depending on location.
#LI-DNI
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
About Telnyx
Telnyx is an industry leader that's not just imagining the future of global connectivity—we're building it. From architecting and amplifying the reach of a private, global, multi-cloud IP network, to bringing hyperlocal edge technology right to your fingertips through intuitive APIs, we're shaping a new era of seamless interconnection between people, devices, and applications.
We're driven by a desire to transform and modernize what's antiquated, automate the manual, and solve real-world problems through innovative connectivity solutions. As a testament to our success, we're proud to stand as a financially stable and profitable company. Our robust profitability allows us not only to invest in pioneering technologies but also to foster an environment of continuous learning and growth for our team.
Our collective vision is a world where borderless connectivity fuels limitless innovation. By joining us, you can be part of laying the foundations for this interconnected future. We're currently seeking passionate individuals who are excited about the opportunity to contribute to an industry-shaping company while growing their own skills and careers.
The Impact You'll Drive
As a Senior ML Engineer (Speech Synthesis), you’ll be a founding member of the team building Telnyx’s next-generation speech synthesis systems. This is a greenfield opportunity — you’ll define the stack, architecture, and best practices for training and deploying state-of-the-art multilingual text-to-speech (TTS) models that power our voice AI agents.
You’ll build everything from distributed training pipelines to inference services that generate ultra-low-latency, lifelike voices across dozens of languages. Your work will bridge research and production — shaping how millions of people experience real-time conversational AI.
What You’ll Work On
What You’ll Work With
What We’re Looking For
Why Telnyx
You’ll be joining a company where voice, infrastructure, and AI converge. Telnyx is building the foundation for real-time, intelligent global communications — and your work on multilingual TTS will be at the core of that vision.
#LI-KG1
#LI-REMOTE
Ready to apply?
Apply to Telnyx
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. In a single environment, teams design and operate analytics, machine learning, and AI agents with the transparency, collaboration, and control enterprises require. Sitting above data platforms, cloud infrastructure, and AI services, Dataiku connects the full enterprise AI stack — empowering organizations to run AI across multi-vendor environments with centralized governance.
The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.
Why Engineering at Dataiku?
Dataiku’s on-premise, cloud, or SaaS-deployed platform connects many data science technologies, and our technology stack reflects our commitment to quality and innovation. We integrate the best of data and AI tech, selecting tools that truly enhance our product. From the latest LLMs to our dedication to open source communities, you'll work with a dynamic range of technologies and contribute to the collective knowledge of global tech innovators. You can find out even more about working in Engineering at Dataiku by taking a look here.
Here are some useful links so you can preview what we do at Dataiku: Dataiku's Key Capabilities and Dataiku's GitHub. You can also take a look at the Gallery, a public instance showcasing some example projects (note that editing is very limited and the instance is regularly reset).
Our product is called Dataiku DSS which stands for Dataiku Data Science Studio. If you’d like to know more about it, you can watch the demo here or try the free version here.
How you’ll make an impact
This position is either onsite/hybrid from our Amsterdam office or fully remote from anywhere in the Netherlands.
As a Fullstack Engineer, you’ll contribute to building Dataiku DSS core features by joining one of the following themes:
What you need to be successful
Ready to apply?
Apply to Dataiku
This is Adyen
Adyen provides payments, data, and financial products in a single solution for customers like Meta, Uber, H&M, and Microsoft - making us the financial technology platform of choice. At Adyen, everything we do is engineered for ambition.
For our teams, we create an environment with opportunities for our people to succeed, backed by the culture and support to ensure they are enabled to truly own their careers. We are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, we deliver innovative and ethical solutions that help businesses achieve their ambitions faster.
Machine Learning Scientist
Adyen is looking for a Machine Learning Scientist to join our team in Amsterdam — a person sitting at the intersection of algorithms, mathematics, and engineering, who can solve problems by designing and implementing production-ready machine learning solutions. You will be responsible for building, developing, and deploying algorithms that power data products at Adyen.
We are currently hiring for the following teams:
Insights - Diagnostics: The Insights team is at the core of this platform, providing the world’s largest merchants with the data and analytics they need to optimize their payment performance. Within this, our Proactive Diagnostics initiative acts as a proactive guard for merchant revenue, closing the loop between detecting an anomaly and providing a clear path to rectification. We operate at the intersection of Big Data and actionable intelligence. By leveraging Adyen’s global payment flow, we apply advanced statistical models and Causal Inference to not only detect performance drops but to explain the "why" behind them. We are looking for a Machine Learning Engineer to help us architect the next generation of our diagnostic engine.
Regulatory Reporting Technology:
Adyen's Regulatory Reporting Tech team is seeking a Machine Learning Scientist to join us in Amsterdam. You will help in further automating and scaling our global regulatory reporting framework to keep Adyen compliant across all markets. Together with the team, you will implement key technical solutions to streamline our regulatory reporting operations. If you have a strong machine learning background and love solving complex problems, we want to hear from you.
In this role, you will:
Who You Are:
Our Diversity, Equity and Inclusion commitments
Our unique approach is a product of our diverse perspectives. This diversity of backgrounds and cultures is essential in helping us maintain our momentum. Our business and technical challenges are unique, and we need as many different voices as possible to join us in solving them - voices like yours. No matter who you are or where you’re from, we welcome you to be your true self at Adyen.
Studies show that women and members of underrepresented communities apply for jobs only if they meet 100% of the qualifications. Does this sound like you? If so, Adyen encourages you to reconsider and apply. We look forward to your application!
What’s next?
Ensuring a smooth and enjoyable candidate experience is critical for us. We aim to get back to you regarding your application within 5 business days. Our interview process tends to take about 4 weeks to complete, but may fluctuate depending on the role. Learn more about our hiring process here. Don’t be afraid to let us know if you need more flexibility.
This role is based out of our Amsterdam office. We are an office-first company and value in-person collaboration; we do not offer remote-only roles.
Ready to apply?
Apply to Adyen
Picnic is turning online grocery shopping from a digital catalogue into an intelligent, personal experience. The Consumer ML team is at the heart of that — building the systems behind personalised recommendations & search, promotional pricing, and fraud detection for millions of customers.
As Tech Lead, you’ll lead a team of ML Engineers through its next growth phase. You’ll develop the people, set direction across the team’s ML domains, and shape our customers’ experience with AI & ML.
💡 Make a difference
You’ll work in an awesome startup environment with the freedom to drive your own projects and create a visible impact.
Our fully electric vehicles and sustainable business model mean you’ll also be contributing to making the world a better place!
🫱🏼🫲🏾 Teamwork makes the dream work
With more than 80 nationalities across 3 countries, you’ll be part of a diverse company with plenty of cool stuff to get involved with, from board game evenings to after-work drinks to our company ski trip and more!
🍎 You are what you eat
You’ll get freshly prepared, healthy lunches and snacks (with plenty of vegetarian, vegan, and halal options). Coffee snob? Don’t worry, our amazing Picnic barista has you covered.
🚴🏽 Stay healthy
Mental health is important. As well as having the option to speak with Picnic colleagues who act as confidential advisors, our collaboration with OpenUp gives you easy access to professional psychologists, along with workshops and materials.
There are plenty of sports communities and events to get involved with, from tennis to yoga, to climbing!
🔋 Attractive package
We offer competitive compensation and a pension plan that looks out for your future self, as well as 25 vacation days per year, so you can recharge your batteries.
🌍 Benefits for expats
It can be daunting starting a new job AND moving to a new country. That’s why we offer lots of support for our many expat colleagues. To find out about our relocation benefits, see here.
Ready to apply?
Apply to Picnic
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We’re looking for a Research Engineer who will own the training stack and model architecture for our Mellum LLM family. Your job is easier said than done: make training faster, cheaper, and more stable at a large scale. You’ll profile, design, and implement changes to the training pipeline – from architecture to custom GPU kernels, as needed.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We are working on an ambitious new platform that provides AI capabilities to all JetBrains products. Our platform is based on models developed in-house for writing and coding assistance, as well as integration with our strategic partners.
We are looking for a Research Engineer who can contribute to training foundation models for coding tasks. You’ll be working on developing Large Language Models from scratch and deploying them into production environments where they will be accessible by end users across the globe.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
At JetBrains, code is our passion. Ever since we started, back in 2000, we’ve been striving to make the strongest, most effective developer tools on earth. Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs.
We’re building multi-step coding agents that can understand large codebases, plan changes, call tools, and iterate with the user. As a Research Engineer in the Agentic Models team, you’ll be responsible for the models, training loops, and evaluation pipelines that power these agents.
You’ll work at the intersection of SFT, RL-style post-training, and product-driven evaluation, using our distributed GPU and MapReduce clusters to ship models into JetBrains products.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
At JetBrains, code is our passion. Ever since we started, back in 2000, we've been striving to make the strongest, most effective developer tools on earth. Today, AI-powered coding agents are becoming a core part of how developers write Kotlin – and we want to make sure they write it well.
The Kotlin AI Value Stream team is responsible for how AI agents understand, generate, and improve Kotlin code across all platforms: Android, Kotlin Multiplatform, server-side, web, desktop, and others. We build the evaluation infrastructure, error analysis tools, and post-training pipelines that measure and improve agent behavior on real Kotlin developer tasks.
As a Research Engineer on this team, you'll own the end-to-end loop: Analyze how agents fail on Kotlin → build evals that capture those failures → research and implement methods to fix them → measure the improvement. Your work will directly shape how millions of developers experience Kotlin through AI coding agents.
Build tools for agentic error analysis
Build evaluation pipelines
Research methods for improving agent and model behavior on Kotlin
Build public Kotlin benchmarks
Don't check every box? That's okay – if you're excited about this work and bring strong fundamentals, we'd love to hear from you. We're happy to talk and provide the training you need to grow into the role.
*Some benefits may vary depending on location.
#LI-DNI
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
As a Machine Learning Engineer, you will play a pivotal role in building systems that drive the training and deployment of large-scale ML models across our global operations. You'll collaborate with leading researchers, hardware experts, and software engineers to build robust solutions that maximize the potential of GPU acceleration, distributed computing, and the latest open-source tools. Your work will influence our trading strategies by accelerating experimentation cycles that foster continuous innovation and refinement.
This is a unique opportunity to solve problems at the intersection of advanced machine learning and trading, where your contributions will shape the future of IMC’s technology and trading capabilities.
Your Core Responsibilities:
Your Skills and Experience:
#LI-DNP
The Base Salary range for the role is included below. Base salary is only one component of total compensation; all full-time, permanent positions are eligible for a discretionary bonus and benefits, including paid leave and insurance. Please visit Benefits - US | IMC Trading for more comprehensive information.
About Us
IMC is a global trading firm powered by a cutting-edge research environment and a world-class technology backbone. Since 1989, we’ve been a stabilizing force in financial markets, providing essential liquidity upon which market participants depend. Across our offices in the US, Europe, Asia Pacific, and India, our talented quant researchers, engineers, traders, and business operations professionals are united by our uniquely collaborative, high-performance culture, and our commitment to giving back. From entering dynamic new markets to embracing disruptive technologies, and from developing an innovative research environment to diversifying our trading strategies, we dare to continuously innovate and collaborate to succeed.
Ready to apply?
Apply to IMC
At IMC, we believe technology is the foundation of our competitive edge — and machine learning is increasingly central to how we trade. Over the past few years, we've been steadily building our machine learning capabilities: developing infrastructure, growing our in-house GPU cluster, deploying models into production, and partnering closely with quant researchers and traders to generate real impact. Now we’re expanding the team, scaling our systems, and accelerating the application of deep learning in our research and execution workflows.
We're looking for a Principal Machine Learning Engineer to help shape the next phase of our platform — influencing architecture, driving best practices, and solving high-leverage problems. You’ll work alongside researchers and technologists to design the systems that power experimentation, training, and deployment of ML models — and help set the direction for how machine learning is done at IMC as we scale.
If you’ve built ML infrastructure at scale elsewhere and are looking for a role where your ideas will genuinely help shape our firm’s future — we’d love to hear from you.
Your Core Responsibilities:
Your Skills and Experience:
Why This Role:
#LI-DNP
The Base Salary range for the role is included below. Base salary is only one component of total compensation; all full-time, permanent positions are eligible for a discretionary bonus and benefits, including paid leave and insurance. Please visit Benefits - US | IMC Trading for more comprehensive information.
About Us
IMC is a global trading firm powered by a cutting-edge research environment and a world-class technology backbone. Since 1989, we’ve been a stabilizing force in financial markets, providing essential liquidity upon which market participants depend. Across our offices in the US, Europe, Asia Pacific, and India, our talented quant researchers, engineers, traders, and business operations professionals are united by our uniquely collaborative, high-performance culture, and our commitment to giving back. From entering dynamic new markets to embracing disruptive technologies, and from developing an innovative research environment to diversifying our trading strategies, we dare to continuously innovate and collaborate to succeed.
Ready to apply?
Apply to IMC
WHO WE ARE 🌍
We help creators get more out of every conversation with Instagram-focused automations and support for other channels like Messenger, WhatsApp, and TikTok. The result? Better engagement, more sales, and real, sustainable growth.
With a diverse team of 350+ people spread across three continents, we’re building the leading Chat Marketing platform that is used — and loved — by more than 1.5 million customers worldwide.
WHO WE'RE LOOKING FOR 🌟
Manychat’s AI Core team turns ML and LLM ideas into production features used in millions of conversations every day. You will work across applied research, prototyping, and production delivery, partnering closely with our Python Platform team and Product to validate ideas quickly and ship high-impact AI workflows.
This role suits someone who loves to build fast, experiment, and take ownership of models, agents, and workflows end to end.
WHAT YOU’LL DO 🤖
TO SHINE IN THIS ROLE 💥
Machine Learning & LLM Expertise
WHAT WE OFFER 🤗
We care deeply about your growth, well-being, and comfort:
Manychat is an Equal Opportunity Employer. We’re committed to building a diverse and inclusive team. We do not discriminate against qualified employees or applicants because of race, color, religion, gender identity, sex, sexual preference, sexual identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, military status, or any other characteristic protected by local law or ordinance.
This commitment is also reflected through our candidate experience. If you have individual needs that may require an accommodation during the interview process, please indicate this in your application. We will do our best to provide assistance throughout your interview process to ensure you’re set up for success.
With my application, I accept the Manychat Privacy Policy.
Ready to apply?
Apply to Manychat