All active MLOps roles based in Amsterdam.
About Nebius:
Nebius is leading a new era in cloud infrastructure for the global AI economy. We are building a full-stack AI cloud platform that supports developers and enterprises from data and model training through to production deployment, without the cost and complexity of building large in-house AI/ML infrastructure.
Built by engineers, for engineers. From large-scale GPU orchestration to inference optimization, we own the hard problems across compute, storage, networking and applied AI.
Listed on Nasdaq (NBIS) and headquartered in Amsterdam, we have a global footprint with R&D hubs across Europe, the UK, North America and Israel. Our team of 1,500+ includes hundreds of engineers with deep expertise across hardware, software and AI R&D.
We seek an experienced Specialist Solutions Architect to support AI-focused customers leveraging Nebius services. In this role, you will be a trusted advisor, collaborating with clients to design scalable AI solutions, resolve technical challenges and manage large-scale AI deployments involving hundreds to thousands of GPUs.
You’re welcome to work on-site in Amsterdam or remotely from any other EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Preferred tooling:
Benefits & Perks:
What's it like to work at Nebius:
Fast moving - Bold thinking - Constant growth - Meaningful impact - Trust and real ownership - Opportunity to shape the future of AI
Equal Opportunity Statement:
Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.
Applicants must be authorized to work in the country in which they apply and will be required to provide proof of employment eligibility as a condition of hire.
If you need accommodations during the application process, please let us know.
Ready to apply?
Apply to Nebius
The role
At Nebius, we’re building a next-generation AI compute platform for large-scale ML training and inference — from a few nodes to thousands of GPUs.
We’re looking for a Technical Product Manager to own product direction for Soperator — our Slurm-on-Kubernetes control plane for GPU clusters.
In this role, you will shape how ML engineers and research teams run, scale, and optimize distributed workloads in production.
If you care about systems that combine performance, reliability, and developer experience at the frontier of AI infrastructure, this role is for you.
Your responsibilities will include:
• Own the full user journey across Soperator clusters: Slurm workflows, dashboards, alerts/notifications, node lifecycle, and training/inference capacity management.
• Define product direction end-to-end: problem discovery → solution design → delivery → adoption.
• Lead deep customer discovery through interviews, usage analytics, and workload analysis to uncover high-impact opportunities.
• Drive execution across platform teams: compute, networking, storage, observability, IAM, and more.
• Translate frontier ML and infrastructure ideas into practical product capabilities for real-world GPU clusters.
• Define success metrics, prioritize roadmap decisions with data, and ensure measurable customer/business impact.
• Lead the open-source strategy and execution for Soperator: shape public roadmap themes, prioritize OSS-facing capabilities, and ensure strong adoption in the community.
We expect you to have:
• 3–5+ years in Product Management, ML infrastructure/MLOps, distributed systems, or cloud platform engineering.
• Strong technical depth in distributed systems, cloud infrastructure, or ML platforms.
• Hands-on familiarity with large-scale ML training and orchestration tools (e.g., Slurm, Kubernetes, Ray).
• Track record of shipping technically complex products with multiple engineering teams.
• Strong communication and stakeholder management across engineering, research, and customers.
• Experience with product analytics, data-informed prioritization, and experimentation.
• High ownership, high learning velocity, and comfort operating in fast-moving AI infrastructure environments.
It will be an added bonus if you have:
• Experience with GPU platforms and HPC primitives: InfiniBand/RDMA, topology-aware scheduling, high-throughput storage.
• Practical understanding of modern ML training stacks: PyTorch, DeepSpeed, FSDP/ZeRO, NCCL.
• Familiarity with efficiency and reliability metrics: Goodput, MFU, failure modes, preemption handling, health checks.
• Exposure to large-scale LLM training/inference systems.
• Experience in observability, performance tuning, or SRE/reliability engineering.
• Customer-facing technical experience (solutioning, support, architecture advisory).
About Nebius
Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.
Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (TOP500 list, November 2023).
Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.
Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.
Token Factory is a part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference platform that makes every kind of foundation model — text, vision, audio, and emerging multimodal architectures — fast, reliable, and effortless to deploy at massive scale. To deliver on that promise, we need an engineer who can make the platform behave flawlessly under extreme load and recover gracefully when the unexpected happens.
In this role you will own the reliability, performance, and observability of the entire inference stack. Your day starts with designing and refining telemetry pipelines — metrics, logs, and traces that turn hundreds of terabytes of signal into clear, actionable insight. From there you might tune Kubernetes autoscalers to squeeze more efficiency out of GPUs, craft Terraform modules that bake resilience into every new cluster, or harden our request-routing and retry logic so even transient failures go unnoticed by users. When incidents do arise, you’ll rely on the automation and runbooks you helped create to detect, isolate, and remediate problems in minutes, then drive the post-mortem culture that prevents recurrence. All of this effort points toward a single goal: scaling the platform smoothly while hitting aggressive cost and reliability targets.
Success in the role calls for deep fluency with Kubernetes, Prometheus, Grafana, Terraform, and the craft of infrastructure-as-code. You script comfortably in Python or Bash, understand the nuances of alert design and SLOs for high-throughput APIs, and have spent enough time in production to know how distributed back-ends fail in the real world. Experience shepherding GPU-heavy workloads — whether with vLLM, Triton, Ray, or another accelerator stack — will serve you well, as will a background in MLOps or model-hosting platforms. Above all, you care about building self-healing systems, thrive on debugging performance from kernel to application layer, and enjoy collaborating with software engineers to turn reliability into a feature users never have to think about.
If the idea of safeguarding the infrastructure that powers tomorrow’s multimodal AI energizes you, we’d love to hear your story.
Compensation
We offer competitive compensation packages based on experience.
Customer experience
Customer experience at Nebius AI Cloud involves tackling customers’ challenges and directly impacting their success by solving real-world AI and ML problems at massive GPU cloud scale. You’ll not only resolve issues, but play a key role in shaping clients’ business success by optimizing their AI solutions.
Working with advanced GPUs such as H200, B200 and GB200, as well as modern ML frameworks, you’ll influence the development of the Nebius AI Cloud and gain experience at the intersection of infrastructure and AI. With minimal bureaucracy, you’ll have the freedom to innovate, take ownership and drive change. Opportunities for growth are abundant in this vibrant and supportive professional community.
We are looking for a Partner Solutions Architect to serve as the technical interface between Nebius and our strategic technology partners, ranging from data platforms and MLOps vendors to AI frameworks and ISVs that build on GPU infrastructure.
This is a hands-on engineering and integration role. You will design and develop integrated solutions, build reference implementations, enable partner engineering teams, and ensure joint customers succeed with combined offerings. You will influence Nebius’ product roadmap and drive deep technical collaboration across partner ecosystems.
You’re welcome to work from our office in Amsterdam or remotely from any EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
This is Adyen
Adyen provides payments, data, and financial products in a single solution for customers like Meta, Uber, H&M, and Microsoft - making us the financial technology platform of choice. At Adyen, everything we do is engineered for ambition.
For our teams, we create an environment with opportunities for our people to succeed, backed by the culture and support to ensure they are enabled to truly own their careers. We are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, we deliver innovative and ethical solutions that help businesses achieve their ambitions faster.
Machine Learning Scientist
Adyen is looking for a Machine Learning Scientist to join our team in Amsterdam: a person sitting at the intersection of algorithms, mathematics, and engineering, who can solve problems by designing and implementing production-ready machine learning solutions. You will be responsible for building, developing and deploying the algorithms that power data products at Adyen.
We are currently hiring for the following teams:
Insights - Diagnostics: The Insights team is at the core of Adyen's platform, providing the world’s largest merchants with the data and analytics they need to optimize their payment performance. Within this, our Proactive Diagnostics initiative acts as a proactive guard for merchant revenue, closing the loop between detecting an anomaly and providing a clear path to rectification. We operate at the intersection of Big Data and actionable intelligence. By leveraging Adyen’s global payment flow, we apply advanced statistical models and Causal Inference to not only detect performance drops but to explain the "why" behind them. We are looking for a Machine Learning Engineer to help us architect the next generation of our diagnostic engine.
Regulatory Reporting Technology:
Adyen's Regulatory Reporting Tech team is seeking a Machine Learning Scientist to join us in Amsterdam. You will help in further automating and scaling our global regulatory reporting framework to keep Adyen compliant across all markets. Together with the team, you will implement key technical solutions to streamline our regulatory reporting operations. If you have a strong machine learning background and love solving complex problems, we want to hear from you.
In this role, you will:
Who You Are:
Our Diversity, Equity and Inclusion commitments
Our unique approach is a product of our diverse perspectives. This diversity of backgrounds and cultures is essential in helping us maintain our momentum. Our business and technical challenges are unique, and we need as many different voices as possible to join us in solving them - voices like yours. No matter who you are or where you’re from, we welcome you to be your true self at Adyen.
Studies show that women and members of underrepresented communities apply for jobs only if they meet 100% of the qualifications. Does this sound like you? If so, Adyen encourages you to reconsider and apply. We look forward to your application!
What’s next?
Ensuring a smooth and enjoyable candidate experience is critical for us. We aim to get back to you regarding your application within 5 business days. Our interview process tends to take about 4 weeks to complete, but may fluctuate depending on the role. Learn more about our hiring process here. Don’t be afraid to let us know if you need more flexibility.
This role is based out of our Amsterdam office. We are an office-first company and value in-person collaboration; we do not offer remote-only roles.
Ready to apply?
Apply to Adyen
This is Adyen
At Adyen, we’re engineered for ambition. We empower our teams with the culture and support they need to own their careers. The people of Adyen are motivated problem-solvers who tackle unique technical challenges at scale, delivering innovative and ethical solutions that help the world’s best businesses achieve their ambitions faster. We’re looking for a motivated Senior Machine Learning Engineer to join our team in Amsterdam.
Senior Machine Learning Engineer
The Customer Risk team is at the front line of Adyen's platform, building the next-generation systems required to assess and mitigate risk in real time. They are responsible for keeping our platform safe, while maintaining a seamless experience for our legitimate global merchants.
They operate at the critical intersection of high-stakes security and massive scale. By leveraging Adyen’s global payment flow, they are building a greenfield risk engine from scratch that moves beyond traditional detection to sophisticated, real-time entity assessment. We are looking for our first Senior Machine Learning Engineer to lead this effort and help architect the future of risk at Adyen.
In this role, you will:
Who You Are:
This role is based in Amsterdam. Our culture is built on the foundation of in-person collaboration, where our teams work side-by-side to solve unique challenges and accelerate growth.
Principal data engineers at Thoughtworks are strategic leaders who spearhead data engineering initiatives, tackle complex business challenges and uncover transformative insights. They possess a deep understanding of a client's business ecosystem and partner with executives to align technology strategies with business objectives. By contextualizing emerging trends and Thoughtworks' explorations, they expand the impact of data engineering within the client organization.
They draw upon their profound expertise in developing modern data architectures and infrastructure for the management of data applications.
Effective collaboration is paramount, as data engineers adeptly convey their discoveries to both technical and non-technical stakeholders. They stay abreast of industry advancements, ensure data quality and security, and provide mentorship to junior team members.
At Thoughtworks, data engineers leverage their deep technical knowledge to solve complex business problems, making a significant impact on client success.
There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best and that extends to empowering our employees in their career journeys.
At Thoughtworks, we use AI tools to support our recruitment team with administrative tasks such as drafting communications, scheduling interviews and writing job descriptions.
Crucially, our AI tools do not screen, assess, rank or make hiring decisions. Every application is reviewed by our team and all selection decisions are made exclusively by our interviewers and hiring managers.
We are committed to fairness and responsible AI. We actively manage our AI systems by testing, monitoring for biased outcomes and implementing mitigation measures. We hold our third-party vendors to these same high standards through a rigorous governance process. For additional information, please see our full Thoughtworks AI Policy for Recruitment.
Thoughtworks is a dynamic and inclusive community of bright and supportive colleagues who are revolutionizing tech. As a leading technology consultancy, we’re pushing boundaries through our purposeful and impactful work. For 30+ years, we’ve delivered extraordinary impact together with our clients by helping them solve complex business problems with technology as the differentiator. Bring your brilliant expertise and commitment for continuous learning to Thoughtworks. Together, let’s be extraordinary.
Ready to apply?
Apply to Thoughtworks
Software engineers and AI agents alike suffer from the same problem: finding that one person or place that will answer their tough, specific question. Many solutions promise to solve this with similarity search in vector databases. Unfortunately, finding the answer is often a puzzle with pieces to be collected across a myriad of contradictory sources and cannot be solved without surgical search and careful reasoning.
Spectrum collects data from an organization’s code, docs, and issues, and organizes knowledge in a unified ontology that AI agents can efficiently search through and reason over. We aim to revolutionize the semantic layer space for software-building organizations and move beyond specs that fall out of sync with code, introducing a living spec – one that’s extracted from the whole system and used to keep it aligned. Spectrum is meant to be the single source of truth for all product and architectural knowledge.
Spectrum is a resident of JetBrains' startup incubator, with startup speed and autonomy, and backed by 25 years of developer tooling expertise. We are looking for a top-class ML Engineer who will help us shape the future of software development. You will own our AI and ML engineering stack and help define the research agenda for our team. Your technical vision and design decisions will directly shape the product and determine its success.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
A resident of JetBrains' startup incubator, Spectrum enjoys startup speed and autonomy, and is backed by 25 years of developer tooling expertise. We are looking for a Senior AI/ML Engineer to build and evolve the ML-powered systems at the heart of our product.
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. In a single environment, teams design and operate analytics, machine learning, and AI agents with the transparency, collaboration, and control enterprises require. Sitting above data platforms, cloud infrastructure, and AI services, Dataiku connects the full enterprise AI stack — empowering organizations to run AI across multi-vendor environments with centralized governance.
The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.
Why Engineering at Dataiku?
Dataiku’s on-premise, cloud, or SaaS-deployed platform connects many data science technologies, and our technology stack reflects our commitment to quality and innovation. We integrate the best of data and AI tech, selecting tools that truly enhance our product. From the latest LLMs to our dedication to open source communities, you'll work with a dynamic range of technologies and contribute to the collective knowledge of global tech innovators. You can find out even more about working in Engineering at Dataiku by taking a look here.
Here are some useful links so you can preview what we do at Dataiku: Dataiku's Key Capabilities; Dataiku's GitHub; and the Gallery, a public instance showcasing some example projects (note that editing is very limited and will be reset regularly).
Our product is called Dataiku DSS which stands for Dataiku Data Science Studio. If you’d like to know more about it, you can watch the demo here or try the free version here.
How you’ll make an impact
This position is either onsite/hybrid from our Amsterdam office or fully remote from anywhere in the Netherlands.
As a Fullstack Engineer, you’ll contribute to building Dataiku DSS core features by joining one of the following themes:
What you need to be successful
Ready to apply?
Apply to Dataiku
CSQ327R45
Depending on experience and scope, this position may be offered as Senior Solutions Consultant or Resident Solutions Architect.
You may know this role as a Big Data Solutions Architect, Analytics Architect, Data Platform Architect, or Technical Consultant. The final title will align with your experience, technical depth, and customer-facing ownership.
As a Data & AI Platform Architect (internal title: Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements tackling their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that require integrating with client systems, training, and other technical tasks to help customers get the most value out of their data. RSAs are billable and know how to complete projects according to specification with excellent customer service. You will report to the regional Manager/Lead.
The impact you will have:
What we look for:
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Ready to apply?
Apply to Databricks
AI Consultant
Amsterdam, Rotterdam, hybrid
DEPT® is a Growth Invention company built to help the world’s most ambitious brands grow faster. Operating at the intersection of technology and marketing, our 4,000+ specialists deliver growth invention services across Brand & Media, Experience, Commerce, CRM, and Technology & Data. We’re 50|50 tech and marketing, partner-led, and first to move. Clients include Google, Lufthansa, Meta, eBay, and OpenAI. We have been certified B Corp and Climate Neutral since 2021.
DEPT®/AI
DEPT®/AI has a single mission: to make the best work in the industry using AI to enhance everything we do. This role sits within our Data & AI practice, which has deep expertise in leveraging AI. The team includes data strategists, consultants, data scientists, and analysts who work alongside DEPT® teams around the world across different services – from commerce and full-funnel media to content engineering and internal operations. You will be solving some of the hardest and most challenging problems facing some of the best-loved brands in the world – and doing so alongside an experienced team.
JOB PURPOSE
We are looking for an AI Consultant who not only knows exactly how to create impact with algorithms, big data, machine learning, and generative AI, but can also translate our tech offerings for our current and future clients, as well as for our wider digital marketing teams internally.
As an AI Consultant, you will work with our excellent portfolio of clients to accelerate their AI initiatives and adoption. You translate business problems into prioritised AI solutions and roadmaps, and guide clients in implementing them with support from our team of talented engineers and developers. Your role focuses on understanding business challenges and designing strategies based on a deep understanding of AI, rather than writing code.
We are looking for AI Consultants who have had experience in a similar role, preferably at an agency.
WHAT YOU’LL DO
WHAT YOU’LL BRING
WE OFFER
WHY DEPT®?
We are a Growth Invention company built to help the world’s most ambitious brands grow faster. Operating at the intersection of technology and marketing, we create what is next by pioneering ideas, acting fast, and moving further because standing still just is not in our DNA.
We are drawn to people who stay curious, move with intent, and never stop inventing. Our culture runs on three values: better together, relentlessly curious, and get sh*t done. It is how we work, how we grow, and how we make things that matter.
At DEPT®, you will find the freedom to explore, the space to collaborate, and the trust to make a real impact – for our clients, for each other, and for the world we are helping to build.
DIVERSITY, EQUITY & INCLUSION
At DEPT®, we take pride in creating an inclusive workplace where everyone has an equal opportunity to thrive. We actively seek to recruit, develop, nurture, and retain talented individuals from diverse backgrounds, with varying skills and perspectives.
Not sure you meet all qualifications? Apply, and let us decide! Research shows that women and members of underrepresented groups tend not to apply for jobs when they think they may not meet every requirement, when in fact they do. We believe in giving everyone a fair chance to shine.
We also encourage you to reach out to us and discuss any reasonable adjustments we can make to support you throughout the recruitment process and your time with us.
Want to know more about our dedication to diversity, equity, and inclusion? Check out our efforts here.
Ready to apply?
Apply to DEPT®
The Job in short
Over 75 million people interact with banking products built on Backbase. Every model you ship here touches real accounts, real decisions, and real financial lives - at scale. That's the standard this role is built around.
Backbase leads in AI-native banking technology, helping the world's biggest banks move from fragmented systems to unified frontlines where humans and AI agents work together. We power 100+ of the largest banks globally, and our AI capabilities sit at the core of that shift.
As a Principal Machine Learning Engineer, you will define how intelligent systems operate in this environment: not just predicting outcomes, but making safe, auditable, and real-time decisions within a controlled execution model.
Reasons to build with us
Real-world impact at scale: Your models run in production for 150+ million end users at the world's leading banks.
AI at the core: We're building the AI-native Banking OS from the ground up - ML engineering here shapes the product direction, not just the feature roadmap.
High-ownership culture: We ship early, iterate fast, and trust engineers to make decisive calls. You'll move at speed without layers of approval slowing you down.
"We don't debate AI in the abstract - we build it and put it in front of real banks. The engineers who thrive here are the ones who start with the problem, own the outcome, and raise the bar every sprint. That's the culture we've built, and it shows in what we ship."
- VP of Engineering, AI & Data
Meet the job
You'll be a key architect of Backbase's AI capabilities. That means moving fast, owning your work end-to-end, and building ML systems that solve real problems for banking partners - not theoretical models that gather dust.
That means:
● Designing, developing, and deploying production-grade ML models that personalize the banking experience and automate complex financial workflows
● Partnering with product and data teams to integrate AI capabilities directly into the Banking Platform
● Taking full ownership of the end-to-end ML lifecycle - from data discovery and feature engineering through to model monitoring and retraining
● Turning complex data architectures into clean, maintainable ML pipelines and APIs
● Running rigorous code reviews and mentoring mid-level engineers to keep raising the technical bar
● Identifying opportunities to apply AI where it removes friction and adds measurable value for end users
How about you
You're a seasoned engineer who thrives in a straight-talk culture. You don't settle for the industry standard - yesterday's best work is today's baseline, and you're always looking for what's next. Deep technical chops and a customer-first mindset aren't in tension for you; they're the same thing.
Your track record also shows:
● 5+ years designing and deploying large-scale ML systems in production environments
● Strong proficiency in Python and modern ML frameworks (PyTorch, TensorFlow, etc.)
● Experience with LLMs, RAG, or agent-based systems in production
● Solid MLOps experience: CI/CD for ML, model versioning, monitoring and observability
● Experience with data and streaming systems (e.g., Spark, Kafka)
● The ability to communicate complex technical concepts clearly to non-technical stakeholders, without oversimplifying or over-complicating
● A proven ability to make decisive technical choices, meet deadlines, and keep momentum in a fast-moving environment
Why Backbase
Backbase is where ambitious engineers come to do the most consequential work of their careers. We're growing fast, we build in the open, and we hold ourselves accountable to outcomes that show up in the real world - not just on slides.
We also offer:
● Competitive salary and performance-based bonus
● Flexible working arrangements and a hybrid setup
● Access to top-tier tools, cloud infrastructure, and ML platforms
● A learning budget to keep your skills sharp and your career moving
● Collaborative teams across Amsterdam, Atlanta, Bangalore, and beyond
● A culture where straight talk is valued and good ideas win regardless of where they come from
● The chance to build AI systems that run at the scale of global banking - and see the results
Ready to apply?
Apply to Backbase
You’ll work on building the tools and infrastructure that help our Machine Learning Engineers build and productionize robust machine learning models.
Working closely with ML Engineers, you’ll identify opportunities to improve the machine learning lifecycle at Picnic, from tools that improve model experimentation to automations that simplify model deployment. You will collaborate with other platform teams at Picnic to keep our tech stack aligned with the rest of the Tech team, while building and integrating the solutions that solve the problems unique to machine learning systems.
Check out some of our previous machine learning projects here: https://blog.picnic.nl/tagged/machine-learning
Various MLOps-oriented projects to:
Your contributions to the platform will power:
You will definitely:
✍🏼 Every expert was once a beginner!
You’ll get plenty of opportunities to challenge yourself and grow, including the Picnic Tech Academy, Lunch & Learn sessions, and tailored soft skills training. We also offer free professional weekly language courses.
🫱🏼🫲🏾 Teamwork makes the dream work
With more than 80 nationalities across 3 countries, you’ll be part of a diverse company with plenty of cool stuff to get involved with, from board game evenings to after-work drinks to our company ski trip and more!
🥗 Fresh Lunch, coffee, and snacks
Our offices are equipped with fully-fledged coffee bars and a kitchen with chefs who prepare delicious, fresh, warm lunches every day so you can keep your energy up.
🚲 Health insurance discount & bike plan
We have a partnership with CZ (a health insurance provider): Picnic employees get a discount of between 5% and 15% on CZ insurance packages. Furthermore, through our partnership with Lease a Bike, you can rent-to-own a new (e)bike at a discounted rate.
🌎 Relocation
If you’re moving from another country to join Picnic, we make it as smooth as possible for you. We’ll cover flight costs for you, your partner, and your kids, plus your first month's rent and moving costs (up to €2000), and we’ll help you with the 30% tax ruling setup and application.
📆 All the rest
At Picnic you get 25 holiday days, we cover your travel expenses, and we offer a pension plan. Your phone and laptop are on us as well.
Ready to apply?
Apply to Picnic
At JetBrains, code is our passion. Ever since we started, back in 2000, we’ve been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs. The ML Workflows Engineering team is dedicated to removing infrastructure challenges, streamlining machine learning operations (MLOps), and enabling teams to focus on the innovative work that matters most – building impactful ML models and intelligent agents. As part of the team, you'll play a key role in designing tools, automation, and pipelines that make machine learning development seamless and intuitive.
By integrating cutting-edge MLOps practices and engineering excellence, we aim to maximize productivity and remove the complexity of ML infrastructure so that our teams can push the boundaries of what’s possible in AI.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We are working on an ambitious new platform that provides AI capabilities to all JetBrains products. Our platform is based on models developed in-house for writing and coding assistance, as well as integration with our strategic partners.
We are looking for a Research Engineer who can contribute to training foundation models for coding tasks. You’ll be working on developing Large Language Models from scratch and deploying them into production environments where they will be accessible by end users across the globe.
Ready to apply?
Apply to JetBrains
JetBrains is evolving beyond standalone developer tools toward a unified, AI-native platform for software development.
AI is no longer just an assistant inside the editor – it is becoming an active participant in how software is planned, built, reviewed, and operated across teams and organizations. This shift introduces new challenges that cannot be solved at the level of individual tools alone: governance, security, cost control, observability, and coordinated work between humans and autonomous agents.
Our goal is to build a platform that enables companies to adopt AI in software development in a structured, scalable, and economically efficient manner without locking them into closed ecosystems. This platform will serve as the execution and governance layer for AI-driven development, deeply integrated with developer tools but designed to work across teams, products, and environments.
We are looking for an experienced ML leader who has created products with an ML backbone, weaving together research, technical excellence, and strong product focus.
We are seeking a professional who excels in three key areas: technology, product vision, and business operations. This role involves extensive cooperation with products across the company – both AI-native products and those just beginning to integrate AI.
Ready to apply?
Apply to JetBrains