All active Jenkins roles based in Amsterdam.
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. In a single environment, teams design and operate analytics, machine learning, and AI agents with the transparency, collaboration, and control enterprises require. Sitting above data platforms, cloud infrastructure, and AI services, Dataiku connects the full enterprise AI stack — empowering organizations to run AI across multi-vendor environments with centralized governance.
The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.
Dataiku is looking for a Data Engineer to join our Enterprise Data and Analytics (EDA) team. As a member of the EDA team, you will play a central role in delivering data to fuel analytics and data-driven insights to various stakeholders and teams within the company. You will also be a key technical member contributing to the data platform that fuels centralized analytics, embedded analytics teams, Generative AI engineering, and self-service users across the organization.
This role is about 50% Data Operations, Support & Troubleshooting, and 50% new development. The data engineering day-to-day will primarily be within the data platform built using Snowflake, Dataiku, and GitHub. Primary development will focus on Python & SQL, DataOps processes built within GitHub Actions & Dataiku, and data platform processes built within Snowflake & Dataiku.
Non-technical skills and learning are also critical, as you will collaborate with engineers from various teams and help deliver solutions across a wide variety of technical domains. The ideal candidate is naturally curious, has excellent verbal and written communication skills, a sharp analytical mind, a positive attitude towards work, and thrives when collaborating towards a shared goal.
This is an internal and non-client-facing role.
Dataiku is unique in that every Dataiker is encouraged to use our own product within our Enterprise Data Platform. That means this is a rare opportunity to deliver a scalable platform with governed data to fuel an entire company of current or potential Data Analysts & Data Consumers! Your responsibilities within the team include but are not limited to:
Develop engineering expertise within the Dataiku Platform to help maintain and develop system integrations, platform automations, and platform configurations.
Develop engineering expertise within Snowflake for data engineering and security/governance features
Build & maintain Python & SQL data replication & data pipelines on large, often complex data sets
Build & maintain data quality metrics & observability to help drive data quality standards
Learn about existing systems and processes across Data Platforms, Data Engineering and Data Governance
Troubleshoot data pipelines, platform automations, and data access systems
Help field and troubleshoot various community questions and challenges
Own, maintain and enhance data operation processes, monitoring & data quality systems
Design data models for both short-term and long-term use cases to support data warehouse scalability
Build & maintain administration systems and applications for monitoring, alerting, data observability, access management, platform metrics, and end user transparency
Identify opportunities for improvements & optimization for greater scalability & delivery velocity
Collaborate closely with Analytics Engineers to provide data & data models for analytical deliverables
Perform root cause analysis on often complex errors to help ensure data pipeline availability
Help test new features in Dataiku and partner tools to both provide feedback internally as well as determine value towards internal analytics & data platform integration
Work closely with key stakeholders across the organization including Infra, embedded analytics teams, Product and Engineering to help foster both technical implementations & requirements gathering
Proactively drive innovation internally by bringing ideas for platform and process improvements
Help contribute to the ongoing documentation of internal systems and processes
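For illustration only: the data quality metrics & observability work mentioned above often starts with simple automated checks such as table freshness. The sketch below is a hedged, minimal example of that idea, not Dataiku's actual tooling; the `orders` table and the 24-hour threshold are hypothetical, and sqlite3 stands in for Snowflake so the snippet is self-contained.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def check_freshness(conn, table, ts_column, max_age_hours=24):
    """Simple data-quality metric: is the newest row in `table` recent enough?"""
    latest = conn.execute(f"SELECT MAX({ts_column}) FROM {table}").fetchone()[0]
    if latest is None:
        return {"table": table, "status": "fail", "reason": "table is empty"}
    age = datetime.now(timezone.utc) - datetime.fromisoformat(latest)
    status = "pass" if age <= timedelta(hours=max_age_hours) else "fail"
    return {"table": table, "status": status,
            "age_hours": round(age.total_seconds() / 3600, 2)}

# Demo against an in-memory database with a hypothetical `orders` table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, loaded_at TEXT)")
conn.execute("INSERT INTO orders VALUES (1, ?)",
             (datetime.now(timezone.utc).isoformat(),))
print(check_freshness(conn, "orders", "loaded_at"))
```

In a real pipeline, checks like this would typically run on a schedule and feed a monitoring or alerting system rather than print to stdout.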
2+ years of relevant experience in Data Engineering / Data Platform Engineering
Strong technical skills in SQL & Python are a must. Experience in Dataiku DSS is a big plus.
Prior experience with Snowflake a plus
Prior experience with DevOps technologies such as GitHub Actions, Azure DevOps, or Jenkins
Experience in building data models
Prior experience building and maintaining replication & data pipelines in a cloud data warehouse or data lake environment
Excellent analytical and creative problem-solving skills - exhibit confidence to ask questions to bring clarity, share ideas, and challenge the norm.
Passion for continuous learning, and for teaching new technologies & implementation strategies to others
Experience working with complex stakeholders; dissecting vague asks and helping to define tangible requirements
Ability to manage multiple projects and time constraints simultaneously in a high-trust remote environment
Ability to wear multiple hats depending on the project with the focus on accomplishing end goals while inspiring colleagues to do the same
Excellent written and verbal communication skills (especially with senior-level stakeholders), with the ability to speak to the business value, data products, & technical capabilities of a platform. Ability to create clear and concise documentation with a high degree of precision
Ready to apply?
Apply to Dataiku
Dataiku is looking for a Data Engineer II to join our Enterprise Data and Analytics (EDA) team. As a member of the EDA Team, you will play a central role in delivering data to fuel analytics, AI, and data-driven insights to various stakeholders and teams within the company. You will also be a key technical member contributing to the Data Platform that fuels centralized analytics, Generative AI engineering, embedded analytics teams, and self-service users across the organization.
You will become a technical expert on the various platforms we work in and help drive engineering excellence both within the EDA team and across the wider Analytics Community. The Data Engineering day-to-day will primarily be within the Data Platform built using Snowflake, Dataiku, and GitHub. Primary development will focus on Python & SQL, DataOps processes built within GitHub Actions & Dataiku, and data platform processes built within Snowflake & Dataiku.
Non-technical skills and learning are also critical, as you will collaborate with engineers from various teams and help deliver solutions across a wide variety of technical domains. Strong software development lifecycle knowledge and DataOps skills are a must. The ideal candidate is naturally curious, has excellent verbal and written communication skills, a sharp analytical mind, a positive attitude towards work, and thrives when collaborating towards a shared goal.
This is an internal and non-client-facing role.
Dataiku is unique in that every Dataiker is encouraged to use our own product within our Enterprise Data Platform. That means this is a rare opportunity to deliver a scalable platform with governed data to fuel an entire company of current or potential Data Analysts! Your responsibilities within the team include but are not limited to:
Be an expert-level engineer within the Dataiku Platform, including platform automation, GenAI capabilities, plugin development, maintenance & troubleshooting
Be an expert-level engineer within Snowflake for data engineering and security/governance features
Build & maintain Python & SQL based platform automation processes
Build & maintain data quality metrics & observability to help drive data quality standards
Design data models for both short-term and long-term use cases to support data warehouse scalability
Build & maintain administration systems and applications for monitoring, alerting, data observability, access management, platform metrics, and end user transparency
Build & maintain GenAI platform solutions focused on security and governance for engineering delivery
Build & maintain DataOps processes for SDLC delivery
Identify opportunities for improvements & optimization for greater scalability & delivery velocity
Collaborate closely with Analytics Engineers to provide data & data models for analytical deliverables
Perform root cause analysis on often complex errors to help ensure data pipeline availability
Help drive technical & architectural decisions on the data platform including decisions on data architecture, data engineering processes, data quality frameworks, data access security & governance frameworks, DataOps processes & data consumption models.
Help test new features in Dataiku and partner tools to both provide feedback internally as well as determine value towards internal analytics & data platform integration
Work closely with key stakeholders across the organization including Infra, embedded analytics teams, Product and Engineering to help foster both technical implementations & requirements gathering
Proactively drive innovation internally with dedicated innovation time & projects that aim to be transformational for either the platform, team or company as a whole.
Actively contribute to the expertise level and competencies of the EDA Team and participate in the creation and support of data development standards and best practices.
3+ years of relevant experience in Data Engineering / Data Platform Engineering
Expertise in SQL & Python is a must. Experience in Dataiku DSS is a big plus.
Prior experience with Snowflake strongly desired
Prior experience with DevOps technologies such as GitHub Actions, Azure DevOps, or Jenkins
Strong understanding of data architecture & data modeling concepts
Prior experience building and maintaining replication & data pipelines in a cloud data warehouse or data lake environment
Excellent analytical and creative problem-solving skills - exhibit confidence to ask questions to bring clarity, share ideas and challenge the norm.
Passion for continuous learning, and for teaching new technologies & implementation strategies to others
Experience working with complex stakeholders; dissecting vague asks and helping to define tangible requirements
Ability to manage multiple projects and time constraints simultaneously in a high-trust remote environment
Ability to wear multiple hats depending on the project with the focus on accomplishing end goals while inspiring colleagues to do the same
Excellent written and verbal communication skills (especially with senior-level stakeholders), with the ability to speak to the business value, data products, & technical capabilities of a platform. Ability to create clear and concise documentation with a high degree of precision
Ready to apply?
Apply to Dataiku
We’re looking for a Test and Automation Engineer to join our hardware R&D team building long-range HF communication systems. You’ll build and scale the systems that validate and operate our hardware across both development and production, working across test automation, lab infrastructure, and hardware coordination. In this role, you’ll own how hardware is exercised end-to-end, ensuring reliable and repeatable performance in demanding environments.
What you'll do
As a Test and Automation Engineer, your key responsibilities include:
Designing, developing, and maintaining automated test frameworks for hardware validation and production testing
Building and maintaining automation for hardware coordination and state control; managing transitions between operational states and integrating with monitoring systems, safety interlocks, and protection mechanisms
Developing Python-based tooling and test suites to exercise hardware interfaces, including Ethernet, SPI/I2C/Serial, Modbus, SCPI, and custom protocols
Implementing and maintaining communication with industrial peripherals and test equipment
Automating lab instrumentation, including oscilloscopes, spectrum analyzers, VNAs, and cable/component testers
Creating and managing CI/CD pipelines that integrate hardware test results into the broader build and release workflow
Instrumenting and monitoring test and production infrastructure by collecting metrics, generating reports, and tracking hardware test coverage over time
Collaborating closely with hardware, RF, and infrastructure engineers to improve system reliability, observability, and testability
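As a hedged illustration of the instrument-automation work described above (and not Optiver's internal tooling): many bench instruments expose SCPI over a raw TCP socket, commonly on port 5025, and in practice a library such as PyVISA is often used instead. The host address and the example `*IDN?` reply below are hypothetical.

```python
import socket

def scpi_query(host, command, port=5025, timeout=2.0):
    """Send one SCPI command over raw TCP and return the instrument's reply.
    Requires a reachable instrument; hosts/ports here are assumptions."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(command.encode("ascii") + b"\n")
        return sock.recv(4096).decode("ascii").strip()

def parse_idn(response):
    """Split a standard *IDN? reply into its four comma-separated fields."""
    vendor, model, serial, firmware = (p.strip() for p in response.split(",", 3))
    return {"vendor": vendor, "model": model,
            "serial": serial, "firmware": firmware}

# With hardware attached this would be: parse_idn(scpi_query("10.0.0.5", "*IDN?"))
# Here we parse a hypothetical identification string instead.
print(parse_idn("Acme Instruments,SA-100,SN0001,1.2.3"))
```

Separating the transport (`scpi_query`) from the parsing (`parse_idn`) keeps the parsing logic testable without an instrument on the bench, which matters when these helpers run inside CI.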
Who you are
Experienced in hardware test automation, production/manufacturing automation, or embedded systems validation
Strong in Python, with a focus on building clean, maintainable, and scalable automation frameworks
Hands-on experience communicating with and coordinating industrial devices and peripherals
Familiar with Ethernet-based systems and protocols (TCP/UDP sockets, network configuration, packet capture/analysis)
Experienced in automating lab instruments using SCPI, VISA, or vendor-specific APIs
Comfortable working in Linux environments, including command-line tooling, shell scripting, and service management
Able to take ownership of systems end-to-end and operate effectively in a highly collaborative, fast-paced environment
Experience with RF systems, SDR, or HF communication systems
Familiarity with Protocol Buffers or similar serialization frameworks
Experience with CI/CD systems such as Jenkins or Bamboo for hardware-integrated pipelines
Understanding of network infrastructure concepts such as VLANs, DHCP, PXE/TFTP boot, and SSH tunneling
Who we are
At Optiver, our mission is to constantly improve the market by injecting liquidity, providing accurate pricing, increasing transparency, and acting as a stabilizing force no matter the market conditions. With a focus on continuous improvement, we help safeguard healthy and efficient markets for all participants. As one of the largest market-making institutions, we are a trusted partner of 70+ exchanges across the globe.
What you'll get
You’ll join a culture of collaboration and excellence, where you’ll be surrounded by curious thinkers and creative problem solvers. Motivated by a passion for continuous improvement, you’ll thrive in a supportive, high-performing environment alongside talented colleagues, working collectively to tackle the toughest problems in the financial markets.
How to apply
Apply directly via the form below. If you have any questions feel free to contact our Recruitment team via our recruitment inquiry form.
Diversity statement
Optiver is committed to diversity and inclusion.
Ready to apply?
Apply to Optiver
This is Adyen
Adyen provides payments, data, and financial products in a single solution for customers like Meta, Uber, H&M, and Microsoft - making us the financial technology platform of choice. At Adyen, everything we do is engineered for ambition.
For our teams, we create an environment with opportunities for our people to succeed, backed by the culture and support to ensure they are enabled to truly own their careers. We are motivated individuals who tackle unique technical challenges at scale and solve them as a team. Together, we deliver innovative and ethical solutions that help businesses achieve their ambitions faster.
We’re looking for an engineer who is passionate about developer productivity, build systems, and automation to enhance and scale our internal development experience. In this role, you’ll help shape the future of our build and testing ecosystem, optimize build/test workflows, and drive automation that improves the day-to-day experience of engineers across Adyen.
What you’ll do
Who you are
Our Diversity, Equity and Inclusion commitments
Our unique approach is a product of our diverse perspectives. This diversity of backgrounds and cultures is essential in helping us maintain our momentum. Our business and technical challenges are unique, and we need as many different voices as possible to join us in solving them - voices like yours. No matter who you are or where you’re from, we welcome you to be your true self at Adyen.
Studies show that women and members of underrepresented communities apply for jobs only if they meet 100% of the qualifications. Does this sound like you? If so, Adyen encourages you to reconsider and apply. We look forward to your application!
What’s next?
Ensuring a smooth and enjoyable candidate experience is critical for us. We aim to get back to you regarding your application within 5 business days. Our interview process tends to take about 4 weeks to complete, but may fluctuate depending on the role. Learn more about our hiring process here. Don’t be afraid to let us know if you need more flexibility.
This role is based out of our Amsterdam office. We are an office-first company and value in-person collaboration; we do not offer remote-only roles.
Ready to apply?
Apply to Adyen
At JetBrains, code is our passion. Ever since we started, back in 2000, we have strived to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
AI is no longer just an assistant inside the editor – it is becoming an active participant in how software is planned, built, reviewed, and operated across teams and organizations. This shift introduces new challenges that cannot be solved at the level of individual tools alone: governance, security, cost control, observability, and coordinated work between humans and autonomous agents.
Our goal is to build a platform that enables companies to adopt AI in software development in a structured, scalable, and economically efficient manner – without locking them into closed ecosystems.
We are building the foundation that connects developer workflows, team-level collaboration, and organizational control into a single coherent system.
This platform will serve as the execution and governance layer for AI-driven development, deeply integrated with developer tools but designed to work across teams, products, and environments.
This is a long-term strategic investment for JetBrains and a key pillar of our vision for the future of software development.
The role
We are looking for a Principal Engineer (JetBrains Cloud Platform, Developer Experience) to drive large-scale improvements to the development experience across the JCP.
This role focuses on making JCP engineers faster and more productive by improving build systems, CI/CD pipelines, local development workflows, tooling infrastructure, and AI pipelines. You will own the developer experience end to end – from how engineers develop, build, and test locally to how code moves through CI and reaches production. As this is a rapidly growing platform, many workflows and processes are still maturing. You will define what a great development experience looks like and drive the organization toward it.
As part of the team, you will:
- Own and drive the strategy for developer experience improvements across the entire JCP platform.
- Optimize build systems (Gradle, Nx, and others) for faster builds, better caching, and reliable reproducibility at scale.
- Set up AI development pipelines, managing context, agents, and handoffs, and leveraging tools developed within the JCP.
- Improve CI/CD pipelines – reduce build times, increase reliability, optimize resource usage, and shorten feedback loops across TeamCity and GitHub.
- Improve containerized development workflows (Docker, Dev Containers) to ensure fast and consistent local environments.
- Identify and eliminate bottlenecks in the development cycle – from code commit to production deployment.
- Establish best practices, tooling standards, and shared infrastructure that enable all teams to move faster.
- Collaborate with platform and product teams to understand pain points and deliver high-impact improvements.
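The build-caching idea behind the optimization work listed above (the principle used by Gradle's build cache, Bazel, and similar tools) can be sketched as a toy content-addressed store: hash a task's inputs, and if the key has been seen, reuse the stored output instead of rerunning the task. This is a hedged, simplified illustration; real build caches hash file contents and store outputs on disk or in a remote service, and all names here are hypothetical.

```python
import hashlib
import json

class BuildCache:
    """Toy content-addressed cache: identical (task, inputs) pairs are
    served from the store instead of being re-executed."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def _key(self, task_name, inputs):
        # Canonical JSON so logically equal inputs hash identically.
        payload = json.dumps({"task": task_name, "inputs": inputs}, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def run(self, task_name, inputs, task_fn):
        key = self._key(task_name, inputs)
        if key in self._store:
            self.hits += 1
            return self._store[key]
        self.misses += 1
        result = task_fn(inputs)
        self._store[key] = result
        return result

# Hypothetical "compile" task: unchanged sources hit the cache, a changed
# source set forces a rerun.
cache = BuildCache()
compile_task = lambda sources: f"binary({','.join(sorted(sources))})"
cache.run("compile", ["a.kt", "b.kt"], compile_task)
cache.run("compile", ["a.kt", "b.kt"], compile_task)          # cache hit
cache.run("compile", ["a.kt", "b.kt", "c.kt"], compile_task)  # cache miss
print(cache.hits, cache.misses)
```

The same key-on-inputs principle extends to remote caching and incremental builds: correctness hinges entirely on the key capturing every input that can affect the output.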
We are looking for someone who:
- Has extensive experience with build systems (Gradle, Maven, or Bazel) and a track record of optimizing them at scale.
- Has strong expertise in CI/CD systems (GitHub Actions, TeamCity, Jenkins, or similar) and knows how to ensure high performance and reliability.
- Has hands-on experience with Docker and containerized development workflows.
- Understands the full software development life cycle and can reason about developer productivity holistically.
- Is able to drive cross-team initiatives and influence engineering practices across a large organization.
- Is motivated by making other engineers more productive and removing friction from their daily work.
We'd be particularly thrilled if you:
- Have experience improving the developer experience at scale in a platform or infrastructure organization.
- Have worked on AI-driven development pipelines.
- Have expertise in build caching, remote execution, and incremental build strategies.
- Have contributed to or maintained open-source build tooling or CI/CD infrastructure.
- Enjoy digging into performance problems and turning slow, flaky processes into fast, reliable ones.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
The role
We are looking for a Developer Experience Lead to drive large-scale improvements to the development experience across the JCP.
This role focuses on making JCP engineers faster and more productive by improving build systems, CI/CD pipelines, local development workflows, tooling infrastructure, and AI pipelines. You will own the developer experience end to end – from how engineers develop, build, and test locally to how code moves through CI and reaches production. As this is a rapidly growing platform, many workflows and processes are still maturing. You will define what a great development experience looks like and drive the organization toward it.
As part of the team, you will:
- Own and drive the strategy for developer experience improvements across the entire JCP platform.
- Optimize build systems (Gradle, Nx, and others) for faster builds, better caching, and reliable reproducibility at scale.
- Set up AI development pipelines, managing context, agents, and handoffs, and leveraging tools developed within the JCP.
- Improve CI/CD pipelines – reduce build times, increase reliability, optimize resource usage, and shorten feedback loops across TeamCity and GitHub.
- Improve containerized development workflows (Docker, Dev Containers) to ensure fast and consistent local environments.
- Identify and eliminate bottlenecks in the development cycle – from code commit to production deployment.
- Establish best practices, tooling standards, and shared infrastructure that enable all teams to move faster.
- Collaborate with platform and product teams to understand pain points and deliver high-impact improvements.
We are looking for someone who:
- Has extensive experience with build systems (Gradle, Maven, or Bazel) and a track record of optimizing them at scale.
- Has strong expertise in CI/CD systems (GitHub Actions, TeamCity, Jenkins, or similar) and knows how to ensure high performance and reliability.
- Has hands-on experience with Docker and containerized development workflows.
- Understands the full software development life cycle and can reason about developer productivity holistically.
- Is able to drive cross-team initiatives and influence engineering practices across a large organization.
- Is motivated by making other engineers more productive and removing friction from their daily work.
We'd be particularly thrilled if you:
- Have experience improving the developer experience at scale in a platform or infrastructure organization.
- Have worked on AI-driven development pipelines.
- Have expertise in build caching, remote execution, and incremental build strategies.
- Have contributed to or maintained open-source build tooling or CI/CD infrastructure.
- Enjoy digging into performance problems and turning slow, flaky processes into fast, reliable ones.
Ready to apply?
Apply to JetBrains
We are XYZ, a dynamic and forward-thinking consultancy firm that operates across a wide range of industries, creating solutions tailored to specific needs. We bring long-lasting impact and growth to our partners and clients.
With our multidisciplinary teams, we create bold strategies and innovative solutions tailored to our clients’ needs, helping them capitalize on opportunities. Together we strive to bring long-lasting impact and celebrate collective victories.
At XYZ we share an entrepreneurial mindset, a compelling passion to create, and, more importantly, a firm belief in our partnership-driven business model. This is embedded in our culture and safeguarded as we evolve.
Our team numbers over 450 ambitious professionals from various backgrounds, spread across several continents. Our work environment unifies creative problem-solving and strategic thinking. We provide resources to visionaries who understand the direction in which the world is moving. Tomorrow’s world is data-driven. It is digital. It is international.
Become part of the journey, become part of XYZ. Partners for impact.
Ready to apply?
Apply to Liffey Moher and Blarney Inc.