All active MLOps roles based in the Netherlands.
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
We are seeking a customer-obsessed Solutions Architect to lead technical engagement with Nebius customers across Europe. This role is focused on presales excellence, strong technical storytelling, and driving adoption of Nebius AI Cloud across a wide range of industry segments.
As a Solutions Architect, you will act as a trusted technical partner to customers, helping them understand, evaluate, and adopt Nebius for their AI/ML and high-performance computing workloads. You will combine deep technical expertise with strong communication skills to clearly articulate Nebius’ value proposition and differentiate our platform in a competitive market.
You’re welcome to work remotely from Europe.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
What we offer:
We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!
Equal Opportunity Statement:
Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.
Applicants must be authorized to work in the country in which they apply, and will be required to provide proof of employment eligibility as a condition of hire.
Ready to apply?
Apply to Nebius
We seek an experienced Specialist Solutions Architect to support AI-focused customers leveraging Nebius services. In this role, you will be a trusted advisor, collaborating with clients to design scalable AI solutions, resolve technical challenges, and manage large-scale AI deployments involving hundreds to thousands of GPUs.
You’re welcome to work on-site in Amsterdam or remotely from any other EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Preferred tooling:
What we offer:
Ready to apply?
Apply to Nebius
Customer experience
Customer experience at Nebius AI Cloud involves tackling customers’ challenges and directly impacting their success by solving real-world AI and ML problems at massive GPU cloud scale. You’ll not only resolve issues, but play a key role in shaping clients’ business success by optimizing their AI solutions.
Working with advanced GPUs such as H200, B200 and GB200, as well as modern ML frameworks, you’ll influence the development of the Nebius AI Cloud and gain experience at the intersection of infrastructure and AI. With minimal bureaucracy, you’ll have the freedom to innovate, take ownership and drive change. Opportunities for growth are abundant in this vibrant and supportive professional community.
We are looking for a Partner Solutions Architect to serve as the technical interface between Nebius and our strategic technology partners, ranging from data platforms and MLOps vendors to AI frameworks and ISVs that build on GPU infrastructure.
This is a hands-on engineering and integration role. You will design and develop integrated solutions, build reference implementations, enable partner engineering teams, and ensure joint customers succeed with combined offerings. You will influence Nebius’ product roadmap and drive deep technical collaboration across partner ecosystems.
You’re welcome to work from our office in Amsterdam or remotely from any EU country.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
What we offer:
Ready to apply?
Apply to Nebius
Principal data engineers at Thoughtworks are strategic leaders who spearhead data engineering initiatives, tackle complex business challenges, and uncover transformative insights. They possess a deep understanding of a client's business ecosystem and partner with executives to align technology strategies with business objectives. By contextualizing emerging trends and Thoughtworks' own explorations, they expand the impact of data engineering within the client organization.
They draw upon their profound expertise in developing modern data architectures and infrastructure for the management of data applications.
Effective collaboration is paramount, as data engineers adeptly convey their discoveries to both technical and non-technical stakeholders. They stay abreast of industry advancements, ensure data quality and security, and provide mentorship to junior team members.
At Thoughtworks, data engineers leverage their deep technical knowledge to solve complex business problems, making a significant impact on client success.
There is no one-size-fits-all career path at Thoughtworks: however you want to develop your career is entirely up to you. But we also balance autonomy with the strength of our cultivation culture. This means your career is supported by interactive tools, numerous development programs and teammates who want to help you grow. We see value in helping each other be our best and that extends to empowering our employees in their career journeys.
At Thoughtworks, we use AI tools to support our recruitment team with administrative tasks such as drafting communications, scheduling interviews and writing job descriptions.
Crucially, our AI tools do not screen, assess, rank or make hiring decisions. Every application is reviewed by our team and all selection decisions are made exclusively by our interviewers and hiring managers.
We are committed to fairness and responsible AI. We actively manage our AI systems by testing, monitoring for biased outcomes and implementing mitigation measures. We hold our third-party vendors to these same high standards through a rigorous governance process. For additional information, please see our full Thoughtworks AI Policy for Recruitment.
Thoughtworks is a dynamic and inclusive community of bright and supportive colleagues who are revolutionizing tech. As a leading technology consultancy, we’re pushing boundaries through our purposeful and impactful work. For 30+ years, we’ve delivered extraordinary impact together with our clients by helping them solve complex business problems with technology as the differentiator. Bring your brilliant expertise and commitment for continuous learning to Thoughtworks. Together, let’s be extraordinary.
Ready to apply?
Apply to Thoughtworks
Compensation
We offer competitive compensation packages based on experience.
What we offer:
Ready to apply?
Apply to Nebius
Token Factory is a part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference platform that makes every kind of foundation model — text, vision, audio, and emerging multimodal architectures — fast, reliable, and effortless to deploy at massive scale. To deliver on that promise, we need an engineer who can make the platform behave flawlessly under extreme load and recover gracefully when the unexpected happens.
In this role you will own the reliability, performance, and observability of the entire inference stack. Your day starts with designing and refining telemetry pipelines — metrics, logs, and traces that turn hundreds of terabytes of signal into clear, actionable insight. From there you might tune Kubernetes autoscalers to squeeze more efficiency out of GPUs, craft Terraform modules that bake resilience into every new cluster, or harden our request-routing and retry logic so even transient failures go unnoticed by users. When incidents do arise, you’ll rely on the automation and runbooks you helped create to detect, isolate, and remediate problems in minutes, then drive the post-mortem culture that prevents recurrence. All of this effort points toward a single goal: scaling the platform smoothly while hitting aggressive cost and reliability targets.
Success in the role calls for deep fluency with Kubernetes, Prometheus, Grafana, Terraform, and the craft of infrastructure-as-code. You script comfortably in Python or Bash, understand the nuances of alert design and SLOs for high-throughput APIs, and have spent enough time in production to know how distributed back-ends fail in the real world. Experience shepherding GPU-heavy workloads — whether with vLLM, Triton, Ray, or another accelerator stack — will serve you well, as will a background in MLOps or model-hosting platforms. Above all, you care about building self-healing systems, thrive on debugging performance from kernel to application layer, and enjoy collaborating with software engineers to turn reliability into a feature users never have to think about.
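The request-routing hardening described above — absorbing transient failures so users never notice them — is commonly implemented as retries with exponential backoff and jitter. The sketch below is purely illustrative and is not Nebius code; all names (`call_with_retries`, `flaky`) are hypothetical:

```python
import random
import time

def call_with_retries(fn, attempts=4, base_delay=0.05,
                      retriable=(TimeoutError, ConnectionError)):
    """Retry a flaky call with exponential backoff and full jitter,
    so short-lived failures are absorbed before a user ever sees them."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # retry budget exhausted: surface the real error
            # Full jitter: sleep a random fraction of the exponential cap,
            # which spreads out retries from many concurrent clients.
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))

# Simulate a dependency that times out twice, then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

print(call_with_retries(flaky))  # → ok
```

The jitter matters at GPU-cloud scale: without it, clients that failed together retry together, turning one transient blip into a synchronized retry storm.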
If the idea of safeguarding the infrastructure that powers tomorrow’s multimodal AI energizes you, we’d love to hear your story.
What we offer:
Ready to apply?
Apply to Nebius
The role
At Nebius, we’re building a next-generation AI compute platform for large-scale ML training and inference — from a few nodes to thousands of GPUs.
We’re looking for a Technical Product Manager to own product direction for Soperator — our Slurm-on-Kubernetes control plane for GPU clusters.
In this role, you will shape how ML engineers and research teams run, scale, and optimize distributed workloads in production.
If you care about systems that combine performance, reliability, and developer experience at the frontier of AI infrastructure, this role is for you.
Your responsibilities will include:
• Own the full user journey across Soperator clusters: Slurm workflows, dashboards, alerts/notifications, node lifecycle, and training/inference capacity management.
• Define product direction end-to-end: problem discovery → solution design → delivery → adoption.
• Lead deep customer discovery through interviews, usage analytics, and workload analysis to uncover high-impact opportunities.
• Drive execution across platform teams: compute, networking, storage, observability, and IAM.
• Translate frontier ML and infrastructure ideas into practical product capabilities for real-world GPU clusters.
• Define success metrics, prioritize roadmap decisions with data, and ensure measurable customer/business impact.
• Lead the open-source strategy and execution for Soperator: shape public roadmap themes, prioritize OSS-facing capabilities, and ensure strong adoption in the community.
We expect you to have:
• 3–5+ years in Product Management, ML infrastructure/MLOps, distributed systems, or cloud platform engineering.
• Strong technical depth in distributed systems, cloud infrastructure, or ML platforms.
• Hands-on familiarity with large-scale ML training and orchestration tools (e.g., Slurm, Kubernetes, Ray).
• Track record of shipping technically complex products with multiple engineering teams.
• Strong communication and stakeholder management across engineering, research, and customers.
• Experience with product analytics, data-informed prioritization, and experimentation.
• High ownership, high learning velocity, and comfort operating in fast-moving AI infrastructure environments.
It will be an added bonus if you have:
• Experience with GPU platforms and HPC primitives: InfiniBand/RDMA, topology-aware scheduling, high-throughput storage.
• Practical understanding of modern ML training stacks: PyTorch, DeepSpeed, FSDP/ZeRO, NCCL.
• Familiarity with efficiency and reliability metrics: Goodput, MFU, failure modes, preemption handling, health checks.
• Exposure to large-scale LLM training/inference systems.
• Experience in observability, performance tuning, or SRE/reliability engineering.
• Customer-facing technical experience (solutioning, support, architecture advisory).
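MFU (Model FLOPs Utilization), mentioned among the efficiency metrics above, is often estimated with the rule of thumb that dense transformer training costs roughly 6 × parameters FLOPs per token. A minimal back-of-the-envelope sketch, with all numbers purely illustrative:

```python
def model_flops_utilization(params: float, tokens_per_s: float,
                            n_gpus: int, peak_flops_per_gpu: float) -> float:
    """MFU = achieved FLOPs/s divided by aggregate peak FLOPs/s,
    using the ~6 * params FLOPs-per-token estimate for dense
    transformer training (forward + backward pass)."""
    achieved = 6.0 * params * tokens_per_s
    peak = n_gpus * peak_flops_per_gpu
    return achieved / peak

# Illustrative: a 7B-parameter model at 1.2M tokens/s on 64 GPUs,
# each with an assumed ~989 TFLOPs dense BF16 peak.
print(round(model_flops_utilization(7e9, 1.2e6, 64, 989e12), 3))  # → 0.796
```

Goodput extends this idea from instantaneous throughput to wall-clock time, discounting FLOPs lost to restarts, stragglers, and failed steps.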
About Nebius
Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.
Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (Top 500 list, November 2023).
Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.
Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.
What we offer:
Ready to apply?
Apply to Nebius
Software engineers and AI agents alike suffer from the same problem: finding that one person or place that will answer their tough, specific question. Many solutions promise to solve this with similarity search in vector databases. Unfortunately, finding the answer is often a puzzle with pieces to be collected across a myriad of contradictory sources and cannot be solved without surgical search and careful reasoning.
Spectrum collects data from an organization’s code, docs, and issues, and organizes knowledge in a unified ontology that AI agents can efficiently search through and reason over. We aim to revolutionize the semantic layer space for software-building organizations and move beyond specs that fall out of sync with code, introducing a living spec – one that’s extracted from the whole system and used to keep it aligned. Spectrum is meant to be the single source of truth for all product and architectural knowledge.
Spectrum is a resident of JetBrains' startup incubator, with startup speed and autonomy, and backed by 25 years of developer tooling expertise. We are looking for a top-class ML Engineer who will help us shape the future of software development. You will own our AI and ML engineering stack and help define the research agenda for our team. Your technical vision and design decisions will directly shape the product and determine its success.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
A resident of JetBrains' startup incubator, Spectrum enjoys startup speed and autonomy, and is backed by 25 years of developer tooling expertise. We are looking for a Senior AI/ML Engineer to build and evolve the ML-powered systems at the heart of our product.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
The climate crisis is the defining challenge of our time—but it’s also the greatest opportunity for innovation, and a challenge we’re proud to take on. At Overstory, we’re harnessing cutting-edge technology to enable a resilient electrical grid that keeps communities thriving as our world changes.
The grid is the backbone of life as we know it. It powers hospitals, keeps food fresh, and ensures communities stay connected. But extreme weather, aging infrastructure, and growing wildfire risks are putting this critical system under pressure. All of this combined makes the electric utility industry the greatest opportunity for tackling climate change.
One of the leading causes of catastrophic wildfires and power outages? Trees and brush coming into contact with power lines.
That’s where we help. At Overstory, we use AI and advanced satellite imagery to pinpoint and prioritize vegetation risks before they materialize. By giving utilities critical analysis on those risks, we’re helping prevent outages, reduce wildfire risks, and accelerate the transition to a safer, more resilient grid.
Our team spans the Americas and Europe, and we work with utility partners across the Americas and beyond. We’re outdoor enthusiasts, musicians, artists, athletes, parents, and adventurers. What unites us is a passion for solving complex problems, a commitment to climate action, and the belief that technology should be a force for good.
Join us to help us build a more resilient world together.
We’re looking for a Director of Platform & Infrastructure to own and evolve our core engineering foundations. Reporting to our VP of Product Engineering, this role will lead SRE, MLOps, and Platform teams with a mandate to create leverage across the entire organization.
As a scaling startup, we’re evolving from early-stage infrastructure to a more mature, reliable platform that can support rapid product development and growing customer demand. This role is critical in helping us make that transition — building systems, standards, and teams that enable speed today while laying the foundation for long-term scale.
Partnering closely with product, data, and application engineering teams, you will set the technical vision, define best practices, and ensure our infrastructure enables teams to move fast with confidence.
This is a highly cross-functional leadership role for someone who combines deep technical judgment with a systems mindset, strong people leadership, and a passion for building platforms that empower others.
Time zone requirement: Europe (GMT/WET, CET, EET) and Eastern North America (NST, AST, EST)
Note: We believe that all people are capable of great things. We encourage you to apply even if you do not meet all of the requirements that are listed within this job description.
We are a group of 100 people from all over the world. Fifteen nationalities are represented in our team and at last count we speak fourteen languages: English, Dutch, French, Spanish, German, Italian, Portuguese, Russian, Luxembourgish, Lithuanian, Bulgarian, Cantonese, Estonian, and Danish.
We work remotely from eleven countries and are looking for candidates who live and work in one of them: the United States, the Netherlands, the United Kingdom, Ireland, Estonia, Portugal, France, Sweden, Switzerland, Denmark, and Canada. We gather in person once a year for our unforgettable team gathering event. We also offer the option to occasionally meet up for in-person collaboration.
The climate crisis is a human crisis that requires diverse perspectives to solve.
We place enormous value on diversity and believe that the best ideas emerge when people with different backgrounds and experience work together. We remain committed to scaling a team that reflects the communities we serve, and strive to uphold equitable and inclusive practices across every aspect of our business. We are responsible for creating and maintaining a culture where everyone - regardless of background - has a voice in building a sustainable future.
We act with urgency.
We recognize that change is constant, and we find joy and power in exploration.
Just as ecosystems need biodiversity to thrive, our resiliency comes from our differences.
We love the power of machines but we nurture each other as humans.
We assume the best in everyone, and we share ideas openly so that we have a positive impact.
_________________________________
Use of AI in Our Hiring Process
We sometimes use AI tools to support parts of our hiring process, such as helping us manage applications more efficiently or ensuring job descriptions are clear and inclusive. All hiring decisions are always made by people, not machines. Any data processed by AI is handled securely in line with GDPR and our Privacy Notice.
Ready to apply?
Apply to Overstory
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. In a single environment, teams design and operate analytics, machine learning, and AI agents with the transparency, collaboration, and control enterprises require. Sitting above data platforms, cloud infrastructure, and AI services, Dataiku connects the full enterprise AI stack — empowering organizations to run AI across multi-vendor environments with centralized governance.
The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.
Why Engineering at Dataiku?
Dataiku’s on-premise, cloud, or SaaS-deployed platform connects many data science technologies, and our technology stack reflects our commitment to quality and innovation. We integrate the best of data and AI tech, selecting tools that truly enhance our product. From the latest LLMs to our dedication to open source communities, you'll work with a dynamic range of technologies and contribute to the collective knowledge of global tech innovators. You can find out even more about working in Engineering at Dataiku by taking a look here.
Here are some useful links so you can preview what we do at Dataiku: Dataiku's Key Capabilities and Dataiku's GitHub. You can also take a look at the Gallery, a public instance showcasing some example projects (note that editing is very limited and will be reset regularly).
Our product is called Dataiku DSS, which stands for Dataiku Data Science Studio. If you’d like to know more about it, you can watch the demo here or try the free version here.
How you’ll make an impact
This position is either onsite/hybrid from our Amsterdam office or fully remote from anywhere in the Netherlands.
As a Fullstack Engineer, you’ll contribute to building Dataiku DSS core features by joining one of the following themes:
What you need to be successful
Ready to apply?
Apply to Dataiku
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. In a single environment, teams design and operate analytics, machine learning, and AI agents with the transparency, collaboration, and control enterprises require. Sitting above data platforms, cloud infrastructure, and AI services, Dataiku connects the full enterprise AI stack — empowering organizations to run AI across multi-vendor environments with centralized governance.
The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.
Why Engineering at Dataiku?
Dataiku’s on-premise, cloud, or SaaS-deployed platform connects many data science technologies, and our technology stack reflects our commitment to quality and innovation. We integrate the best of data and AI tech, selecting tools that truly enhance our product. From the latest LLMs to our dedication to open source communities, you'll work with a dynamic range of technologies and contribute to the collective knowledge of global tech innovators. You can find out even more about working in Engineering at Dataiku by taking a look here.
Here are some useful links so you can preview what we do at Dataiku: Dataiku's Key Capabilities; Dataiku's GitHub; and the Gallery, a public instance showcasing some example projects (note that editing is very limited and the instance is regularly reset).
Our product is called Dataiku DSS, which stands for Dataiku Data Science Studio. If you’d like to know more about it, you can watch the demo here or try the free version here.
How you’ll make an impact
This position is either onsite/hybrid from our Berlin office or fully remote from anywhere in Germany.
As a Fullstack Engineer, you’ll contribute to building Dataiku DSS core features by joining one of the following themes:
What you need to be successful
Ready to apply?
Apply to Dataiku
CSQ327R45
Depending on experience and scope, this position may be offered as Senior Solutions Consultant or Resident Solutions Architect
You may know this role as a Big Data Solutions Architect, Analytics Architect, Data Platform Architect, or Technical Consultant. The final title will align with your experience, technical depth, and customer-facing ownership.
As a Data & AI Platform Architect (internal title: Resident Solutions Architect) in our Professional Services team, you will work with clients on short- to medium-term engagements tackling their big data challenges using the Databricks platform. You will deliver data engineering, data science, and cloud technology projects that involve integrating with client systems, providing training, and handling other technical tasks to help customers get the most value out of their data. RSAs are billable and know how to complete projects to specification with excellent customer service. You will report to the regional Manager/Lead.
The impact you will have:
What we look for:
About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.
Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region click here.
Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.
Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Ready to apply?
Apply to Databricks
You’ll work on building the tools and infrastructure to help our Machine Learning Engineers build and productionize robust machine learning models.
Working closely with ML Engineers, you’ll identify opportunities to improve the machine learning lifecycle at Picnic, from tools that improve model experimentation to automations that simplify model deployment. You will collaborate with other platform teams at Picnic to keep our tech stack aligned with the rest of the Tech team, while building and integrating solutions to the problems unique to machine learning systems.
Check out some of our previous machine learning projects here: https://blog.picnic.nl/tagged/machine-learning
Various MLOps-oriented projects to:
Your contributions to the platform will power:
You will definitely:
✍🏼 Every expert was once a beginner!
You’ll get plenty of opportunities to challenge yourself and grow, including the Picnic Tech Academy, Lunch & Learn sessions, and tailored soft skills training. We also offer free professional weekly language courses.
🫱🏼🫲🏾 Teamwork makes the dream work
With more than 80 nationalities across 3 countries, you’ll be part of a diverse company with plenty of cool stuff to get involved with, from board game evenings to after-work drinks to our company ski trip and more!
🥗 Fresh Lunch, coffee, and snacks
Our offices are equipped with fully fledged coffee bars, a kitchen, and chefs who prepare delicious fresh, warm lunches every day so you can keep your energy up.
🚲 Health insurance discount & bike plan
We have a partnership with CZ (a health insurance provider): Picnic employees get a discount of between 5% and 15% on CZ insurance packages. Furthermore, through our partnership with Lease a Bike, you can rent-to-own a new (e-)bike at a discounted rate.
🌎 Relocation
If you’re moving from another country to join Picnic, we make the transition as smooth as possible. We’ll cover flight costs for you, your partner, and your kids, plus your first month's rent and moving costs (up to €2,000), and we’ll help you with the 30% tax ruling setup and application.
📆 All the rest
At Picnic you get 25 holiday days, we cover your travel expenses, and we offer a pension plan. Your phone and laptop are on us as well.
Ready to apply?
Apply to Picnic
At JetBrains, code is our passion. Ever since we started, back in 2000, we’ve been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs. The ML Workflows Engineering team is dedicated to removing infrastructure challenges, streamlining machine learning operations (MLOps), and enabling teams to focus on the innovative work that matters most – building impactful ML models and intelligent agents. As part of the team, you'll play a key role in designing tools, automation, and pipelines that make machine learning development seamless and intuitive.
By integrating cutting-edge MLOps practices and engineering excellence, we aim to maximize productivity and remove the complexity of ML infrastructure so that our teams can push the boundaries of what’s possible in AI.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We are working on an ambitious new platform that provides AI capabilities to all JetBrains products. Our platform is based on models developed in-house for writing and coding assistance, as well as integration with our strategic partners.
We are looking for a Research Engineer who can contribute to training foundation models for coding tasks. You’ll be working on developing Large Language Models from scratch and deploying them into production environments, where they will be accessible to end users across the globe.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
JetBrains is evolving beyond standalone developer tools toward a unified, AI-native platform for software development.
AI is no longer just an assistant inside the editor – it is becoming an active participant in how software is planned, built, reviewed, and operated across teams and organizations. This shift introduces new challenges that cannot be solved at the level of individual tools alone: governance, security, cost control, observability, and coordinated work between humans and autonomous agents.
Our goal is to build a platform that enables companies to adopt AI in software development in a structured, scalable, and economically efficient manner without locking them into closed ecosystems. This platform will serve as the execution and governance layer for AI-driven development, deeply integrated with developer tools but designed to work across teams, products, and environments.
We are looking for an experienced ML leader who has created products with an ML backbone, weaving together research, technical excellence, and strong product focus.
We are seeking a professional who excels in three key areas: technology, product vision, and business operations. This role involves extensive cooperation with products across the company, both AI-native products and those just beginning to integrate AI.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains