All active MLOps roles based in Germany.
About Nebius:
Nebius is leading a new era in cloud infrastructure for the global AI economy. We are building a full-stack AI cloud platform that supports developers and enterprises from data and model training through to production deployment, without the cost and complexity of building large in-house AI/ML infrastructure.
Built by engineers, for engineers. From large-scale GPU orchestration to inference optimization, we own the hard problems across compute, storage, networking and applied AI.
Listed on Nasdaq (NBIS) and headquartered in Amsterdam, we have a global footprint with R&D hubs across Europe, the UK, North America and Israel. Our team of 1,500+ includes hundreds of engineers with deep expertise across hardware, software and AI R&D.
The role
At Nebius, we’re building a next-generation AI compute platform for large-scale ML training and inference — from a few nodes to thousands of GPUs.
We’re looking for a Technical Product Manager to own product direction for Soperator — our Slurm-on-Kubernetes control plane for GPU clusters.
In this role, you will shape how ML engineers and research teams run, scale, and optimize distributed workloads in production.
If you care about systems that combine performance, reliability, and developer experience at the frontier of AI infrastructure, this role is for you.
Your responsibilities will include:
• Own the full user journey across Soperator clusters: Slurm workflows, dashboards, alerts/notifications, node lifecycle, and training/inference capacity management.
• Define product direction end-to-end: problem discovery → solution design → delivery → adoption.
• Lead deep customer discovery through interviews, usage analytics, and workload analysis to uncover high-impact opportunities.
• Drive execution across platform teams: compute, networking, storage, observability, and IAM.
• Translate frontier ML and infrastructure ideas into practical product capabilities for real-world GPU clusters.
• Define success metrics, prioritize roadmap decisions with data, and ensure measurable customer/business impact.
• Lead the open-source strategy and execution for Soperator: shape public roadmap themes, prioritize OSS-facing capabilities, and ensure strong adoption in the community.
We expect you to have:
• 3–5+ years in Product Management, ML infrastructure/MLOps, distributed systems, or cloud platform engineering.
• Strong technical depth in distributed systems, cloud infrastructure, or ML platforms.
• Hands-on familiarity with large-scale ML training and orchestration tools (e.g., Slurm, Kubernetes, Ray).
• Track record of shipping technically complex products with multiple engineering teams.
• Strong communication and stakeholder management across engineering, research, and customers.
• Experience with product analytics, data-informed prioritization, and experimentation.
• High ownership, high learning velocity, and comfort operating in fast-moving AI infrastructure environments.
It will be an added bonus if you have:
• Experience with GPU platforms and HPC primitives: InfiniBand/RDMA, topology-aware scheduling, high-throughput storage.
• Practical understanding of modern ML training stacks: PyTorch, DeepSpeed, FSDP/ZeRO, NCCL.
• Familiarity with efficiency and reliability metrics: Goodput, MFU, failure modes, preemption handling, health checks.
• Exposure to large-scale LLM training/inference systems.
• Experience in observability, performance tuning, or SRE/reliability engineering.
• Customer-facing technical experience (solutioning, support, architecture advisory).
About Nebius
Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.
Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (Top 500 list, November 2023).
Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.
Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.
Benefits & Perks:
What's it like to work at Nebius:
Fast moving - Bold thinking - Constant growth - Meaningful impact - Trust and real ownership - Opportunity to shape the future of AI
Equal Opportunity Statement:
Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.
Applicants must be authorized to work in the country in which they apply and will be required to provide proof of employment eligibility as a condition of hire.
If you need accommodations during the application process, please let us know.
Ready to apply?
Apply to Nebius
We are seeking a customer-obsessed Solutions Architect to lead technical engagement with Nebius customers across Europe. This role is focused on presales excellence, strong technical storytelling, and driving adoption of Nebius AI Cloud across a wide range of industry segments.
As a Solutions Architect, you will act as a trusted technical partner to customers, helping them understand, evaluate, and adopt Nebius for their AI/ML and high-performance computing workloads. You will combine deep technical expertise with strong communication skills to clearly articulate Nebius’ value proposition and differentiate our platform in a competitive market.
You’re welcome to work remotely from Europe.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Ready to apply?
Apply to Nebius
We seek an experienced Specialist Solutions Architect to support AI-focused customers leveraging Nebius services. In this role, you will be a trusted advisor, collaborating with clients to design scalable AI solutions, resolve technical challenges and manage large-scale AI deployments involving hundreds to thousands of GPUs.
You’re welcome to work on-site in Amsterdam or remotely from any other EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Preferred tooling:
Ready to apply?
Apply to Nebius
Token Factory is a part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference platform that makes every kind of foundation model — text, vision, audio, and emerging multimodal architectures — fast, reliable, and effortless to deploy at massive scale. To deliver on that promise, we need an engineer who can make the platform behave flawlessly under extreme load and recover gracefully when the unexpected happens.
In this role you will own the reliability, performance, and observability of the entire inference stack. Your day starts with designing and refining telemetry pipelines — metrics, logs, and traces that turn hundreds of terabytes of signal into clear, actionable insight. From there you might tune Kubernetes autoscalers to squeeze more efficiency out of GPUs, craft Terraform modules that bake resilience into every new cluster, or harden our request-routing and retry logic so even transient failures go unnoticed by users. When incidents do arise, you’ll rely on the automation and runbooks you helped create to detect, isolate, and remediate problems in minutes, then drive the post-mortem culture that prevents recurrence. All of this effort points toward a single goal: scaling the platform smoothly while hitting aggressive cost and reliability targets.
Success in the role calls for deep fluency with Kubernetes, Prometheus, Grafana, Terraform, and the craft of infrastructure-as-code. You script comfortably in Python or Bash, understand the nuances of alert design and SLOs for high-throughput APIs, and have spent enough time in production to know how distributed back-ends fail in the real world. Experience shepherding GPU-heavy workloads — whether with vLLM, Triton, Ray, or another accelerator stack — will serve you well, as will a background in MLOps or model-hosting platforms. Above all, you care about building self-healing systems, thrive on debugging performance from kernel to application layer, and enjoy collaborating with software engineers to turn reliability into a feature users never have to think about.
If the idea of safeguarding the infrastructure that powers tomorrow’s multimodal AI energizes you, we’d love to hear your story.
Ready to apply?
Apply to Nebius
Customer experience
Customer experience at Nebius AI Cloud involves tackling customers’ challenges and directly impacting their success by solving real-world AI and ML problems at massive GPU cloud scale. You’ll not only resolve issues, but play a key role in shaping clients’ business success by optimizing their AI solutions.
Working with advanced GPUs such as H200, B200 and GB200, as well as modern ML frameworks, you’ll influence the development of the Nebius AI Cloud and gain experience at the intersection of infrastructure and AI. With minimal bureaucracy, you’ll have the freedom to innovate, take ownership and drive change. Opportunities for growth are abundant in this vibrant and supportive professional community.
We are looking for a Partner Solutions Architect to serve as the technical interface between Nebius and our strategic technology partners, ranging from data platforms and MLOps vendors to AI frameworks and ISVs that build on GPU infrastructure.
This is a hands-on engineering and integration role. You will design and develop integrated solutions, build reference implementations, enable partner engineering teams, and ensure joint customers succeed with combined offerings. You will influence Nebius’ product roadmap and drive deep technical collaboration across partner ecosystems.
You’re welcome to work from our office in Amsterdam or remotely from any EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Ready to apply?
Apply to Nebius
We are seeking a highly skilled and customer-focused professional to join our team as a Solutions Architect specializing in cloud infrastructure and MLOps. As a Cloud Solutions Architect, you will play a pivotal role in designing and implementing cutting-edge solutions for our clients, leveraging cloud technologies for ML/AI teams and serving as a trusted technical advisor as they build their pipelines.
You’re welcome to work from any DACH country (Germany, Austria, Switzerland).
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Ready to apply?
Apply to Nebius
At Wolt, we create technology that brings joy, simplicity and earnings to the neighborhoods of the world. In 2014 we started with delivery of restaurant food. Now we’re building the delivery of (almost) everything and you’ll find us in over 500 cities in 30 countries around the world. In 2022 we joined forces with DoorDash and together we keep on dreaming big and expanding across the globe.
Working at Wolt isn’t always easy, but it’s definitely exciting. Here you’ll learn more, build more, and ship more than in most other companies. You’ll be challenged a lot, but also have a lot of fun on the way. So, if you’re a self-starter with drive and entrepreneurial spirit, this could be the ride of your life.
We are looking for a Staff Data Scientist to join Wolt’s mission in Finland, Sweden, or Germany!
The Selection Analytics Team at Wolt owns the data and infrastructure that helps all 32 of our country teams build a portfolio of high-quality restaurant and retail stores. Our overarching mission is to provide customers with the best selection by enabling sales teams across all markets to acquire the best merchants.
To do this, we work directly with country sales teams to help them identify and prioritize the highest potential merchant candidates. We also work with Wolt leadership by providing metrics to track selection quality, upon which they develop strategic initiatives and goals. This is a lean and mighty team that owns and operates the engine for merchant acquisition globally.
As a senior member of the Selection Analytics team, your role involves leveling up our approach to identifying and prioritizing better merchants. You’ll achieve this by:
This is also a rare, high-exposure experience where you will then use your findings to work directly with country leadership to help them uncover growth opportunities. When you join this team, you will be driving action from the frontlines, rather than just crunching numbers and creating dashboards.
The position will be filled as soon as we find the right person, so make sure to apply as soon as you realize you really, really want to join us!
For any further questions about the position, you can turn to the Product+ Talent Acquisition Partner - Zhanna Filintseva (zhanna.filintseva@wolt.com)
We’re committed to growing and empowering a more inclusive community within our company, industry, and cities. That’s why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel.
Ready to apply?
Apply to Wolt - English
We’re the team behind Latent Diffusion, Stable Diffusion, and FLUX: foundational generative models that changed how the world creates images and video, powering tools used by millions of creators, developers, and businesses worldwide. Our FLUX models are among the most advanced in the world, and we’re just getting started.
Headquartered in Freiburg, Germany with a growing presence in San Francisco, we’re scaling fast while staying true to what makes us different: research excellence, open science, and building technology that expands human creativity.
You'll design, deploy, and maintain the ML infrastructure backbone that makes frontier AI research possible. This isn't abstract systems work—every decision you make directly impacts whether a multi-week training run succeeds, whether inference stays fast enough for production, whether researchers can iterate quickly or wait hours for resources.
You'll be the person who:
You've built and managed ML infrastructure at scale and understand that supporting AI research is fundamentally different from traditional cloud infrastructure. You've been paged because a training run failed. You've debugged why storage became the bottleneck. You know the difference between infrastructure that works in demos and infrastructure that works when researchers depend on it for months-long experiments.
You likely have:
We'd be especially excited if you:
We’re a distributed team with real offices that people actually use. Depending on your role, you’ll either join us in Freiburg or SF at least 2 days a week (or one full week every other week), or work remotely with a monthly in-person week to stay connected. We’ll cover reasonable travel costs to make this possible. We think in-person time matters, and we’ve structured things to make it accessible to all. We’ll discuss what this will look like for the role during our interview process.
Everything we do is grounded in four values:
If this sounds like work you’d enjoy, we’d love to hear from you.
Base Annual Salary: $180,000–$300,000 USD
We're based in Europe and value depth over noise, collaboration over hero culture, and honest technical conversations over hype. Our models have been downloaded hundreds of millions of times, but we're still a ~50-person team learning what's possible at the edge of generative AI.
Ready to apply?
Apply to Black Forest Labs
Principal data engineers at Thoughtworks are strategic leaders who spearhead data engineering initiatives, tackle complex business challenges and uncover transformative insights. They possess a deep understanding of a client's business ecosystem and partner with executives to align technology strategies with business objectives. By contextualizing emerging trends and Thoughtworks' exploration, they expand the impact of data engineering within the client organization.
They draw upon their profound expertise in developing modern data architectures and infrastructure for the management of data applications.
Effective collaboration is paramount, as data engineers adeptly convey their discoveries to both technical and non-technical stakeholders. They stay abreast of industry advancements, ensure data quality and security, and provide mentorship to junior team members.
At Thoughtworks, data engineers leverage their deep technical knowledge to solve complex business problems, making a significant impact on client success.
There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best and that extends to empowering our employees in their career journeys.
At Thoughtworks, we use AI tools to support our recruitment team with administrative tasks such as drafting communications, scheduling interviews and writing job descriptions.
Crucially, our AI tools do not screen, assess, rank or make hiring decisions. Every application is reviewed by our team and all selection decisions are made exclusively by our interviewers and hiring managers.
We are committed to fairness and responsible AI. We actively manage our AI systems by testing, monitoring for biased outcomes and implementing mitigation measures. We hold our third-party vendors to these same high standards through a rigorous governance process. For additional information, please see our full Thoughtworks AI Policy for Recruitment.
Thoughtworks is a dynamic and inclusive community of bright and supportive colleagues who are revolutionizing tech. As a leading technology consultancy, we’re pushing boundaries through our purposeful and impactful work. For 30+ years, we’ve delivered extraordinary impact together with our clients by helping them solve complex business problems with technology as the differentiator. Bring your brilliant expertise and commitment for continuous learning to Thoughtworks. Together, let’s be extraordinary.
#LI-Onsite
Ready to apply?
Apply to Thoughtworks
Share this job
Principal data engineers at Thoughtworks, are strategic leaders who spearhead data engineering initiatives, tackle complex business challenges and uncover transformative insights. They possess a deep understanding of a client's business ecosystem and partner with executives to align technology strategies with business objectives. By contextualizing emerging trends and Thoughtworks' exploration, they expand the impact of data engineering within the client organization.
They draw upon their profound expertise in developing modern data architectures and infrastructure for the management of data applications.
Effective collaboration is paramount, as data engineers adeptly convey their discoveries to both technical and non-technical stakeholders. They stay abreast of industry advancements, ensure data quality and security, and provide mentorship to junior team members.
At Thoughtworks, data engineers leverage their deep technical knowledge to solve complex business problems, making a significant impact on client success.
There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best and that extends to empowering our employees in their career journeys.
At Thoughtworks, we use AI tools to support our recruitment team with administrative tasks such as drafting communications, scheduling interviews and writing job descriptions.
Crucially, our AI tools do not screen, assess, rank or make hiring decisions. Every application is reviewed by our team and all selection decisions are made exclusively by our interviewers and hiring managers.
We are committed to fairness and responsible AI. We actively manage our AI systems by testing, monitoring for biased outcomes and implementing mitigation measures. We hold our third-party vendors to these same high standards through a rigorous governance process. For additional information, please see our full Thoughtworks AI Policy for Recruitment.
Thoughtworks is a dynamic and inclusive community of bright and supportive colleagues who are revolutionizing tech. As a leading technology consultancy, we’re pushing boundaries through our purposeful and impactful work. For 30+ years, we’ve delivered extraordinary impact together with our clients by helping them solve complex business problems with technology as the differentiator. Bring your brilliant expertise and commitment for continuous learning to Thoughtworks. Together, let’s be extraordinary.
#LI-Onsite
Ready to apply?
Apply to Thoughtworks
Share this job
Principal data engineers at Thoughtworks, are strategic leaders who spearhead data engineering initiatives, tackle complex business challenges and uncover transformative insights. They possess a deep understanding of a client's business ecosystem and partner with executives to align technology strategies with business objectives. By contextualizing emerging trends and Thoughtworks' exploration, they expand the impact of data engineering within the client organization.
They draw upon their profound expertise in developing modern data architectures and infrastructure for the management of data applications.
Effective collaboration is paramount, as data engineers adeptly convey their discoveries to both technical and non-technical stakeholders. They stay abreast of industry advancements, ensure data quality and security, and provide mentorship to junior team members.
At Thoughtworks, data engineers leverage their deep technical knowledge to solve complex business problems, making a significant impact on client success.
There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best and that extends to empowering our employees in their career journeys.
At Thoughtworks, we use AI tools to support our recruitment team with administrative tasks such as drafting communications, scheduling interviews and writing job descriptions.
Ready to apply?
Apply to Thoughtworks
Veeam is the Data and AI Trust Company, specializing in helping organizations ensure their data and AI are fully understood, secured, and resilient to enable the acceleration of safe AI at scale. As the market leader in both data resilience and data security posture management, Veeam is built for the convergence of identity, data, security, and AI risk. Headquartered in Seattle with offices in more than 30 countries, Veeam protects over 550,000 customers worldwide, who trust Veeam to keep their businesses running. Join us as we go fearlessly forward together, growing, learning, and making a real impact for some of the world’s biggest brands.
The ideal candidate will be naturally collaborative, articulate, extremely organized, have a solid technical understanding of Veeam products, and motivated by maximizing customer success and outcomes. Soft skills combined with technical skills are key in this role. You will partner with Customer Success Engineers (CSEs) and Account Executives (AEs) to drive customer outcomes across security-related products and use cases within the Veeam Data Platform (VDP). You’ll lead readiness checks, data modeling, and risk conversations with CISO/CIO stakeholders, while monitoring telemetry and maturity against the Veeam Data Resilience Maturity Model (DRMM) to optimize posture and identify expansion opportunities.
Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice.
The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes.
By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice.
By submitting your application, you acknowledge that the information provided in your job application and any supporting documents is complete and accurate to the best of your knowledge. Any misrepresentation, omission, or falsification of information may result in disqualification from consideration for employment or, if discovered after employment begins, termination of employment.
Ready to apply?
Apply to Veeam Software
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. In a single environment, teams design and operate analytics, machine learning, and AI agents with the transparency, collaboration, and control enterprises require. Sitting above data platforms, cloud infrastructure, and AI services, Dataiku connects the full enterprise AI stack — empowering organizations to run AI across multi-vendor environments with centralized governance.
The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.
Technical Account Managers are highly experienced Architects who bring both deep technical expertise and strong interpersonal skills to support a small portfolio of strategic clients (approximately 3 per TAM). They must be quick on their feet and able to put a positive spin on challenging customer situations, both in the boardroom with a client CTO and while sharing the command line with a client admin. They must be effective technically, both as communicators and doers. They must be capable of managing and maintaining a client relationship, while keeping a tight organizational watch over the technical aspects of their accounts.
How you'll make an impact
What you'll need to be successful
How you'll stand out
What does the hiring process look like?
Ready to apply?
Apply to Dataiku
About Buynomics:
Buynomics is the next-generation SaaS company in pricing and product optimization. It employs large-scale simulation technologies to digitize customers and predict their purchase behavior in any environment. Our clients use Buynomics to make data-driven decisions about their current and new products, overall portfolio structure, pricing, promotions, trade terms, and more, 100x faster and more precisely than any other solution.
Our mission is to enable transparent, customer-centric commercial decisions. We’re building the operating system for commercial strategy, providing companies with a single source of truth for all market-facing questions related to pricing, products, and promotions.
About the role:
Buynomics is building a modular, service-oriented platform to power advanced commercial decision-making for enterprise customers.
As we expand from multi-tenant SaaS into enterprise single-tenant deployments, we are looking for a Lead Platform Engineer to design and build our multi-cloud Kubernetes foundation.
This role focuses on:
You will operate as a senior technical leader, defining the deployment architecture that enables our next-generation platform to run reliably across AWS, Azure, and GCP.
Why this role:
What you'll do:
What you'll need:
Nice-to-Haves:
What we offer:
To ensure a smooth and efficient process and that no applications are overlooked, please apply directly to the role. While we'd love to respond to everyone who reaches out, applying ensures that your application is properly tracked and considered.
Buynomics is proud to be an equal-opportunity employer. This means that we don’t discriminate based on race or ethnic origin, color, the language(s) you speak, where you (or your parents) are from, or whether or not you consider yourself to have a disability. Neither will your age, gender, gender identity, sexual orientation, religion, beliefs, or political opinions play a part in your application with us. We’re a diverse team in so many ways, and we love it that way.
GDPR Data Privacy Notice
By applying for a position at Buynomics, you consent to the processing of your personal data for recruitment purposes. Buynomics will handle your information in accordance with applicable data protection laws, including the GDPR. Your data may be processed through our applicant tracking system, Greenhouse, and used solely to assess your application. We will retain your data only as long as necessary for recruitment purposes or as required by law. You may access, rectify, or request deletion of your data by contacting careers@buynomics.com.
Ready to apply?
Apply to Buynomics
Software engineers and AI agents alike suffer from the same problem: finding that one person or place that will answer their tough, specific question. Many solutions promise to solve this with similarity search in vector databases. Unfortunately, finding the answer is often a puzzle with pieces to be collected across a myriad of contradictory sources and cannot be solved without surgical search and careful reasoning.
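The gap between raw similarity ranking and the multi-source reasoning described above can be sketched in a few lines. The snippet below is a toy illustration with invented three-dimensional "embeddings" (not Spectrum's actual pipeline): pure nearest-neighbour search happily surfaces a stale document that merely resembles the query.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Hypothetical embeddings of knowledge sources (values invented purely
# for illustration).
docs = {
    "auth-module README (current)":   [0.90, 0.10, 0.00],
    "deprecated auth wiki page":      [0.88, 0.12, 0.00],
    "issue tracker: real root cause": [0.50, 0.50, 0.70],
}

query = [0.85, 0.15, 0.10]  # "why does login fail after the auth change?"

ranked = sorted(docs, key=lambda name: cosine(docs[name], query), reverse=True)
for name in ranked:
    print(f"{cosine(docs[name], query):.3f}  {name}")
# The deprecated wiki page ranks first: it is textually closest to the
# query, even though the issue tracker holds the actual answer.
```

Resolving cases like this requires relating evidence across sources and reasoning over it, which is where an organized knowledge layer comes in.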
Spectrum collects data from an organization’s code, docs, and issues, and organizes knowledge in a unified ontology that AI agents can efficiently search through and reason over. We aim to revolutionize the semantic layer space for software-building organizations and move beyond specs that fall out of sync with code, introducing a living spec – one that’s extracted from the whole system and used to keep it aligned. Spectrum is meant to be the single source of truth for all product and architectural knowledge.
Spectrum is a resident of JetBrains' startup incubator, with startup speed and autonomy, and backed by 25 years of developer tooling expertise. We are looking for a top-class ML Engineer who will help us shape the future of software development. You will own our AI and ML engineering stack and help define the research agenda for our team. Your technical vision and design decisions will directly shape the product and determine its success.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
A resident of JetBrains' startup incubator, Spectrum enjoys startup speed and autonomy, and is backed by 25 years of developer tooling expertise. We are looking for a Senior AI/ML Engineer to build and evolve the ML-powered systems at the heart of our product.
Ready to apply?
Apply to JetBrains
A career driven by you. Join AutoScout24 Group and experience life in the fast lane. We’re not just leading the market; we’re creating it. We’re constantly innovating, evolving and transforming things for our customers. We’re driven, ambitious and determined to do our best work, together. For a career that doesn’t stand still, where you can put your ideas in the driving seat and change the way the world moves, join our international community.
As Principal Data Scientist, you will define and drive the data science strategy across AutoScout24, with a strong focus on AI, machine learning, and Generative AI. You will work closely with product, engineering, and business leaders to turn data into scalable products that create clear business impact. You will lead senior data scientists and act as a technical role model across the company.
WHAT YOU’LL DO
YOU COME WITH
A culture of growth. With over 50 nationalities represented, we’re proud to welcome people of all backgrounds to our growing community, and we’re always finding new ways to help people feel welcomed and included. Here, you’ll have the tools, training and support you need to work in a way that suits you, learn new skills and reach your potential.
For a career where you can drive our business and shape your future, apply now.
Ready to apply?
Apply to AutoScout24
A career driven by you. Join AutoScout24 Group and experience life in the fast lane. We’re not just leading the market; we’re creating it. We’re constantly innovating, evolving, and transforming things for our customers. We’re driven, ambitious, and determined to do our best work together. For a career that doesn’t stand still, where you can put your ideas in the driving seat and change the way the world moves, join our international community.
As Senior GenAI Data Scientist, you’ll join our Data Science organisation and drive the development of next-generation AI solutions across the company. You will take ownership of building agentic GenAI systems, integrating knowledge into models, and setting up strong evaluation and monitoring frameworks that ensure we deliver reliable, scalable, and high-value AI products. This position is permanent and located in Munich (Germany) or Toronto (Canada).
What you’ll do:
What you’ll bring:
Ready to apply?
Apply to AutoScout24
Location: We are flexible! Come and join us either in Hamburg, Berlin or Barcelona.
Freenow by Lyft empowers smarter mobility decisions, helping people to move freely and cities to thrive.
The Freenow marketplace is a real-time, two-sided platform connecting millions of riders and drivers. Its efficiency and scalability are powered by a sophisticated ecosystem of Machine Learning systems spanning key domains such as Pricing, Allocation, and Marketplace Payment & Risk.
In this role, you’ll help advance our MLOps practices, build the services and infrastructure that scale ML models from research to real-time production, and observe how they impact the daily lives of our users and our business.
Be ready to work in a multinational, diverse, highly motivated and collaborative team of passionate developers who strive for excellence and like to have fun doing it. Are you ready for your next ride?
YOUR DAILY ADVENTURES WILL INCLUDE:
Our tech stack: Python | Airflow | Java | Kubernetes | Kafka | Databricks | AWS & more
TO BE SUCCESSFUL IN THIS ROLE:
Plus more local benefits depending on your work location!
Freenow is an equal opportunity employer and we consider qualified applicants regardless of race, religion, national origin, gender, gender identity, sexual orientation, disability or age.
We want you to grow and evolve, and to bring your true self to work.
Freenow by Lyft empowers smarter mobility decisions, helping people to move freely and cities to thrive. Through our multi-mobility app, we feature broad options for everyone across 9 European markets and over 180 cities. Millions of passengers can access services including taxis, private hire vehicles, carsharing, car rental, e-scooters, e-bikes, e-mopeds, and public transport within a single app.
In July 2025 Freenow was acquired by Lyft, a global mobility platform, which connects riders and drivers for billions of rides across North America and Europe. Together, Freenow and Lyft are creating a more caring and connected world, with transportation for everyone. We are a global, diverse, highly motivated, and collaborative team that strives for excellence and likes to have fun. Ready for your next ride?
Ready to apply?
Apply to FREENOW
Why Engineering at Dataiku?
Dataiku’s on-premise, cloud, or SaaS-deployed platform connects many data science technologies, and our technology stack reflects our commitment to quality and innovation. We integrate the best of data and AI tech, selecting tools that truly enhance our product. From the latest LLMs to our dedication to open source communities, you'll work with a dynamic range of technologies and contribute to the collective knowledge of global tech innovators. You can find out even more about working in Engineering at Dataiku by taking a look here.
Here are some useful links to preview what we do at Dataiku: Dataiku's Key Capabilities and Dataiku's GitHub. You can also take a look at the Gallery, a public instance showcasing some example projects (note that editing is very limited and will be regularly reset).
Our product is called Dataiku DSS, which stands for Dataiku Data Science Studio. If you’d like to know more about it, you can watch the demo here or try the free version here.
How you’ll make an impact
This position is either onsite/hybrid from our Berlin office or fully remote from anywhere in Germany.
As a Fullstack Engineer, you’ll contribute to building Dataiku DSS core features by joining one of the following themes:
What you need to be successful
Ready to apply?
Apply to Dataiku
CSQ227R43
We’re hiring for multiple roles within our Professional Services team. Depending on experience and scope, this position may be offered as Senior Solutions Consultant or Resident Solutions Architect.
You may know this role as a Big Data Solutions Architect, Analytics Architect, Data Platform Architect, or Technical Consultant. The final title will align to your experience, technical depth, and customer-facing ownership.
As a Data and AI Solution Architect (Professional Services) (internal title: Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements addressing their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical work to help customers get the most value out of their data. RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.
The impact you will have:
What we look for:
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Ready to apply?
Apply to Databricks
About Us:
Sauce Labs is the world’s largest full-lifecycle, test automation platform, and the company behind Selenium. Trusted by 80% of the world’s top ten largest financial institutions and over 300,000 enterprise users, Sauce Labs provides the only AI platform capable of turning business intent into autonomous testing and quality assurance. With a proprietary dataset of 8.7 billion test runs, Sauce Labs empowers the Fortune 2000 to bridge the gap between AI-driven code generation and enterprise-grade software quality. Learn more at saucelabs.com.
We are seeking an innovative and experienced AI Architect to join our engineering leadership team. This is a strategic role that will be instrumental in designing and building the next generation of AI-powered features for our continuous testing platform. You will be responsible for architecting scalable and robust AI solutions that transform how our customers gain insights from their test data and production environments, and how they create tests.
Nice to Haves:
We are a hybrid workplace that recognizes the importance of flexibility while valuing in-person collaboration and relationship building. As a result, Saucers located near an office location must be able and willing to come into the office.
Please note our privacy terms when applying for a job at Sauce Labs.
Sauce Labs is proud to be an Equal Opportunity employer and values diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender identity/expression/status, sexual orientation, age, marital status, veteran status or disability status.
Security responsibilities at Sauce
At Sauce, we commit to supporting the health and safety of employees and properties, partnering with internal stakeholders to learn and act on ever-evolving security protocols and procedures. You’ll be expected to fully comply with all policies and procedures related to security at the department and org-wide levels and to exercise a ‘security first’ approach to how we design, build and run our products and services.
Ready to apply?
Apply to Sauce Labs Inc.
At JetBrains, code is our passion. Ever since we started, back in 2000, we’ve been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs. The ML Workflows Engineering team is dedicated to removing infrastructure challenges, streamlining machine learning operations (MLOps), and enabling teams to focus on the innovative work that matters most – building impactful ML models and intelligent agents. As part of the team, you'll play a key role in designing tools, automation, and pipelines that make machine learning development seamless and intuitive.
By integrating cutting-edge MLOps practices and engineering excellence, we aim to maximize productivity and remove the complexity of ML infrastructure so that our teams can push the boundaries of what’s possible in AI.
Ready to apply?
Apply to JetBrains
Recognized on the 2025 Forbes Cloud 100 list, ClickHouse is one of the most innovative and fast-growing private cloud companies. With more than 3,000 customers and ARR that has grown over 250 percent year over year, ClickHouse leads the market in real-time analytics, data warehousing, observability, and AI workloads.
The company’s sustained, accelerating momentum was recently validated by a $400M Series D financing round. Over the past three months, customers including Capital One, Lovable, Decagon, Polymarket, and Airwallex have adopted the platform or expanded existing deployments. These customers join an established base of AI innovators and global brands such as Meta, Cursor, Sony, and Tesla.
We’re on a mission to transform how companies use data. Come be a part of our journey!
The Connectors team is the bridge between ClickHouse and the broader data ecosystem. We build and maintain the integrations that make ClickHouse accessible to millions of developers, data practitioners, and AI agents worldwide, from high-level data visualization plugins (Tableau, PowerBI, Superset, Metabase) to connectors for data frameworks (Apache Spark, Flink, Kafka Connect, Fivetran), orchestration platforms, and AI tooling.
Our work directly shapes how companies process massive datasets: real-time analytics platforms ingesting millions of events per second, observability systems monitoring global infrastructure, and increasingly, the AI-powered data applications redefining how teams work with data. We collaborate closely with the open-source community, internal teams, and enterprise users to ensure ClickHouse integrations set the standard for performance, reliability, and developer experience.
As a Senior Software Engineer specializing in Python and the Data Ecosystem, you'll be a core contributor owning and evolving critical parts of ClickHouse's data engineering ecosystem. This role sits at the intersection of high-performance database engineering and developer experience. You'll craft tools that enable Data Engineers and Data Scientists to harness ClickHouse's speed and scale in the frameworks they already use.
We're looking for someone who has lived the Data Engineer or Data Scientist experience firsthand. The data practitioner's world is shifting rapidly: databases are no longer just query targets, but they're becoming active participants in AI-powered workflows, serving as vector stores for RAG pipelines, backends for LLM-powered agents, and real-time feature stores for ML inference. You understand these workflows not from the outside, but because you've operated within them. You don't just build integrations, you bring product-level insight into what we should build and why.
You'll own the full lifecycle of key Python integrations, driving architecture, performance, and feature direction across:
ClickHouse's columnar architecture and query performance make it exceptionally well-positioned in this new landscape. Your job is to make that potential real: building the robust, production-ready connectors that make ClickHouse the natural choice when data practitioners design their next-generation AI and data systems.
Bonus points for:
Eligible locations:
For roles based in the United States, the typical starting salary range for this position is listed above. In certain locations, such as the San Francisco Bay Area and the New York City Metro Area, a premium market range may apply, as listed.
These salary ranges reflect what we reasonably and in good faith believe to be the minimum and maximum pay for this role at the time of posting. The actual compensation may be higher or lower than the amounts listed, and the ranges may be subject to future adjustments.
An individual’s placement within the range will depend on various factors, including (but not limited to) education, qualifications, certifications, experience, skills, location, performance, and the needs of the business or organization.
If you have any questions or comments about compensation as a candidate, please get in touch with us at paytransparency@clickhouse.com.
Culture - We All Shape It
As part of a rapidly scaling startup, you will be instrumental in shaping our culture.
Are you interested in finding out more about our culture? Learn more about our values here. Check out our blog posts or follow us on LinkedIn to find out more about what’s happening at ClickHouse.
Equal Opportunity & Privacy
ClickHouse provides equal employment opportunities to all employees and applicants and prohibits discrimination and harassment of any type based on factors such as race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Please see here for our Privacy Statement.
Ready to apply?
Apply to ClickHouse
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We are working on an ambitious new platform that provides AI capabilities to all JetBrains products. Our platform is based on models developed in-house for writing and coding assistance, as well as integration with our strategic partners.
We are looking for a Research Engineer who can contribute to training foundation models for coding tasks. You’ll be working on developing Large Language Models from scratch and deploying them into production environments where they will be accessible by end users across the globe.
Ready to apply?
Apply to JetBrains
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
JetBrains is evolving beyond standalone developer tools toward a unified, AI-native platform for software development.
AI is no longer just an assistant inside the editor – it is becoming an active participant in how software is planned, built, reviewed, and operated across teams and organizations. This shift introduces new challenges that cannot be solved at the level of individual tools alone: governance, security, cost control, observability, and coordinated work between humans and autonomous agents.
Our goal is to build a platform that enables companies to adopt AI in software development in a structured, scalable, and economically efficient manner without locking them into closed ecosystems. This platform will serve as the execution and governance layer for AI-driven development, deeply integrated with developer tools but designed to work across teams, products, and environments.
We are looking for an experienced ML leader who has created products with an ML backbone, weaving together research, technical excellence, and strong product focus.
We are seeking a professional who excels in three key areas: technology, product vision, and business operations. This role involves extensive cooperation with products across the company – both AI-native products and those just beginning to integrate AI.
About us
At Ada, we envision a world where everyone gets the healthcare they need. Yet millions remain undiagnosed and untreated due to the complexity of identifying rare and underrecognized conditions, even when effective treatments exist.
We’re changing that. Powered by leading AI and built by a team of physicians and clinical scientists, Ada helps people get answers faster, identifying those at risk and guiding them to the right care. Every successful journey makes us smarter and able to help more people.
If you’re passionate about transforming healthcare and ensuring no one goes undiagnosed, join us and be part of what’s next.
About the role
As our Senior AI Engineer, you will play a key role in advancing the development, validation, and supporting tooling of our patented hybrid AI architecture. Our core technology combines Probabilistic Graphical Models (PGMs) with Generative AI to deliver explainable, transparent, and clinically robust medical reasoning. In this role, you will help shape how these components evolve and scale, ensuring they meet high standards for performance, reliability, and regulatory readiness.
Reporting to our Director of AI, you will operate at the intersection of medical intelligence and software engineering—translating complex requirements into sophisticated technical solutions. You’ll collaborate closely with AI researchers, engineers, and medical experts, contributing to both the strategic direction of the platform and the hands-on implementation of production-grade systems.
This is a remote role, open to candidates based in the UK or Germany.
At Ada, inclusivity isn't just a goal – it's our foundation.
As a proud equal opportunity employer, we embrace diversity in all its forms. We encourage applications from every corner of society, regardless of race, color, religion, sex, pregnancy status, national origin, age, physical and mental abilities, marital status, sexual orientation, gender identity, gender expression, genetic information, and any other characteristic protected by law.
Join us in shaping a more inclusive tomorrow.
Please note that we do not accept applications sent via email. All applications need to be completed using the online form.
All your information will be kept confidential according to GDPR guidelines.
*Commercial inquiries regarding this advertisement are not welcome.
Intrinsic is an AI robotics group at Google aiming to reimagine the potential of industrial robotics. Our team believes that advances in AI, perception and simulation will redefine what’s possible for industrial robotics in the near future – with software and data at the core.
Our mission is to make industrial robotics intelligent, accessible, and usable for millions more businesses, entrepreneurs, and developers. We are a dynamic team of engineers, roboticists, designers, and technologists who are passionate about unlocking the creative and economic potential of industrial robotics.
As a software engineer focused on MLOps and deep learning infrastructure, you will design and build the foundational systems that empower robots with advanced machine learning capabilities. You will work within a cross-functional team of engineers and researchers to develop infrastructure that streamlines the training, evaluation, and deployment of large-scale AI models. Your efforts will provide accessible tools for injecting machine learning techniques into the Intrinsic stack, managing compute resources across cloud and on-premise environments, and ensuring that complex model lifecycles are efficient, reliable, and scalable for real-world industrial applications.
At Intrinsic, we are proud to be an equal opportunity workplace. Employment at Intrinsic is based solely on a person's merit and qualifications directly related to professional competence. Intrinsic does not discriminate against any employee or applicant because of race, creed, color, religion, gender, sexual orientation, gender identity/expression, national origin, disability, age, genetic information, veteran status, marital status, pregnancy or related condition (including breastfeeding), or any other basis protected by law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Intrinsic’s policy to comply with all applicable national, state and local laws pertaining to nondiscrimination and equal opportunity.
If you have a disability or special need that requires accommodation, please contact us at: candidate-support@intrinsic.ai.