All active roles involving GitHub Actions, based in Amsterdam.
Together AI is building the AI Acceleration Cloud, an end-to-end platform for the full generative AI lifecycle, combining the fastest LLM inference engine with state-of-the-art AI cloud infrastructure.
As a Senior AI Infrastructure Engineer, you will play a key role in building the next-generation AI cloud platform – a highly available, global, blazing-fast cloud infrastructure that virtualizes cutting-edge ML hardware (GB200s/GB300s, BlueField DPUs) and equips state-of-the-art ML practitioners with self-serve AI cloud services, such as on-demand and managed Kubernetes and Slurm clusters. This platform serves both our internal SaaS products (inference, fine-tuning) and our external cloud customers, spanning dozens of data centers across the world.
This is a hybrid role, with two days a week in the Amsterdam office.
To be successful, you’ll need to be deeply technical and possess excellent communication, collaboration, and diplomacy skills. You have strong software development fundamentals, as well as solid systems knowledge and troubleshooting abilities.
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed to leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.
Please see our privacy policy at https://www.together.ai/privacy
Ready to apply?
Apply to Together AI
Guerrilla, the Amsterdam-based developer of Horizon Zero Dawn, is looking for an Online Services Engineer to support our Online Tech team with upcoming projects.
Our Online Tech team builds reliable, high-performance online systems with a focus on low-latency delivery. We work with various other disciplines to create tools needed to build online game features. We deliver the networking and replication layers for both game client and server, integration with online platform services (e.g. PSN), authentication, matchmaking, inventory, store, server allocations, and more. The team works in close collaboration with our Online Infrastructure team, which designs and maintains the build and deployment environments for our services, servers and tools.
Our services use industry-standard web technologies built on modern frameworks: Java 21 and Spring Boot 3.5.x (with migration to Java 25 and Spring Boot 4 coming soon), Kotlin, Hibernate, and OpenAPI. We leverage Redis for caching and pub/sub, Amazon SQS for queuing, and PostgreSQL/MySQL for persistence. The services are designed around domain-driven architecture with a hybrid approach between microservices and domain services, and we share as little state as possible between services.
We host our services on Kubernetes backed by Amazon EKS and use other AWS services like S3, DynamoDB, and RDS. Our deployment pipeline uses GitHub Actions and ArgoCD for continuous delivery. Some of our DevOps tooling is built with Golang.
As an Online Services Engineer, you will:
We'd love to hear from you if:
If you are up for the challenge, hit the “apply now” button to submit your application to us. Be sure to submit your CV, motivation letter, and a Java code sample that shows your best and most recent work. We would love to see well-structured code samples and a demo of work relevant to the position.
Please note: This position is based in our studio in the heart of Amsterdam. Guerrilla offers relocation and immigration support.
At Guerrilla, we believe that our team’s varied backgrounds, experiences, and perspectives help us create games that resonate with a diverse audience.
As an equal opportunities employer, we strive to create an inclusive environment, empower employees and embrace diversity.
We encourage everyone to respond as we consider all candidates on the basis of their merit, unique strengths, and perspectives they will bring to the role.
Please refer to our Candidate Privacy Notice for more information about what personal information we collect, how we use it, who we share it with, and your data protection rights.
Please note, Sony Interactive Entertainment conducts background checks at the offer stage for all new employees (which may include criminal background checks for some roles) and will need to process personal information to support these checks.
Ready to apply?
Apply to Guerrilla Games
Dataiku is the Platform for AI Success, the enterprise orchestration layer for building, deploying, and governing AI. In a single environment, teams design and operate analytics, machine learning, and AI agents with the transparency, collaboration, and control enterprises require. Sitting above data platforms, cloud infrastructure, and AI services, Dataiku connects the full enterprise AI stack — empowering organizations to run AI across multi-vendor environments with centralized governance.
The world’s leading companies rely on Dataiku to operationalize AI and run it as a true business performance engine delivering measurable value. For more, visit the Dataiku blog, LinkedIn, X, and YouTube.
Dataiku is looking for a Data Engineer to join our Enterprise Data and Analytics (EDA) team. As a member of the EDA team, you will play a central role in delivering data to fuel analytics and data-driven insights to various stakeholders and teams within the company. You will also be a key technical member contributing to the data platform that fuels centralized analytics, embedded analytics teams, Generative AI engineering, and self-service users across the organization.
This role is about 50% Data Operations, Support & Troubleshooting, and 50% new development. The data engineering day-to-day will primarily be within the data platform built using Snowflake, Dataiku, and GitHub. Primary development will focus on Python & SQL, DataOps processes built within GitHub Actions & Dataiku, and data platform processes built within Snowflake & Dataiku.
Non-technical skills and learning are also critical, as you will collaborate with engineers from various teams and help deliver solutions across a wide variety of technical domains. The ideal candidate is naturally curious, has excellent verbal and written communication skills, a sharp analytical mind, a positive attitude towards work, and thrives when collaborating towards a shared goal.
This is an internal and non-client facing role.
Dataiku is unique in that every Dataiker is encouraged to use our own product within our Enterprise Data Platform. That means this is a rare opportunity to deliver a scalable platform with governed data to fuel an entire company of current or potential Data Analysts & Data Consumers! Your responsibilities within the team include but are not limited to:
Develop engineering expertise within the Dataiku Platform to help maintain and develop system integrations, platform automations, and platform configurations.
Develop engineering expertise within Snowflake for data engineering and security/governance features
Build & maintain Python & SQL data replication & data pipelines on large & often complex data sets
Build & maintain data quality metrics & observability to help drive data quality standards
Learn about existing systems and processes across Data Platforms, Data Engineering and Data Governance
Troubleshoot data pipelines, platform automations, and data access systems
Help field and troubleshoot various community questions and challenges
Own, maintain and enhance data operation processes, monitoring & data quality systems
Design data models for both short term and long term use cases to support data warehouse scalability
Build & maintain administration systems and applications for monitoring, alerting, data observability, access management, platform metrics, and end user transparency
Identify opportunities for improvements & optimization for greater scalability & delivery velocity
Collaborate closely with Analytics Engineers to provide data & data models for analytical deliverables
Perform root cause analysis on often complex errors to help ensure data pipeline availability
Help test new features in Dataiku and partner tools to both provide feedback internally as well as determine value towards internal analytics & data platform integration
Work closely with key stakeholders across the organization including Infra, embedded analytics teams, Product and Engineering to help foster both technical implementations & requirements gathering
Proactively drive innovation internally by bringing ideas for platform and process improvements
Help contribute to the ongoing documentation of internal systems and processes
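Several of the responsibilities above center on data quality metrics and observability. As a rough, dependency-free illustration (not Dataiku's actual tooling — the check names, fields, and threshold here are hypothetical), a minimal batch-level quality check in Python might look like:

```python
# Minimal sketch of a data-quality check pass (hypothetical rules, not
# Dataiku's actual tooling): each run produces named metrics that a
# monitoring system could alert on.

def run_quality_checks(rows, required_fields, max_null_rate=0.05):
    """Return simple quality metrics for a batch of dict-shaped rows."""
    total = len(rows)
    null_counts = {f: 0 for f in required_fields}
    for row in rows:
        for f in required_fields:
            if row.get(f) in (None, ""):
                null_counts[f] += 1
    metrics = {
        "row_count": total,
        "null_rates": {f: (c / total if total else 0.0)
                       for f, c in null_counts.items()},
    }
    # A batch passes only if it is non-empty and every required field
    # stays under the null-rate threshold.
    metrics["passed"] = total > 0 and all(
        rate <= max_null_rate for rate in metrics["null_rates"].values()
    )
    return metrics

batch = [
    {"order_id": 1, "customer_id": "a"},
    {"order_id": 2, "customer_id": None},
]
print(run_quality_checks(batch, ["order_id", "customer_id"]))
```

In practice these metrics would be written to an observability table or dashboard rather than printed, but the shape of the work — define a rule, compute a metric, gate the pipeline on it — is the same.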
2+ years of relevant experience in Data Engineering / Data Platform Engineering
Strong technical skills in SQL & Python are a must. Experience in Dataiku DSS is a big plus.
Prior experience with Snowflake a plus
Prior experience with DevOps technologies such as GitHub Actions, Azure DevOps, or Jenkins
Experience in building data models
Prior experience building and maintaining replication & data pipelines in a cloud data warehouse or data lake environment
Excellent analytical and creative problem-solving skills - exhibit confidence to ask questions to bring clarity, share ideas, and challenge the norm.
Passion for continuous learning, and for teaching new technologies & implementation strategies to others
Experience working with complex stakeholders; dissecting vague asks and helping to define tangible requirements
Ability to manage multiple projects and time constraints simultaneously in a high-trust remote environment
Ability to wear multiple hats depending on the project with the focus on accomplishing end goals while inspiring colleagues to do the same
Excellent written and verbal communication skills (especially with senior-level stakeholders), with the ability to speak to the business value, data products, & technical capabilities of a platform. Ability to create clear and concise documentation with a high degree of precision
Ready to apply?
Apply to Dataiku
Dataiku is looking for a Data Engineer II to join our Enterprise Data and Analytics (EDA) team. As a member of the EDA Team, you will play a central role in delivering data to fuel analytics, AI, and data-driven insights to various stakeholders and teams within the company. You will also be a key technical member contributing to the Data Platform that fuels centralized analytics, Generative AI engineering, embedded analytics teams, and self-service users across the organization.
You will become a technical expert on the various platforms we work in and help drive engineering excellence both within the EDA team and across the wider Analytics Community. The Data Engineering day to day will primarily be within the Data Platform built using Snowflake, Dataiku, and GitHub. Primary development will focus on Python & SQL, DataOps processes built within GitHub Actions & Dataiku, and data platform processes built within Snowflake & Dataiku.
Non-technical skills and learning are also critical, as you will collaborate with engineers from various teams and help deliver solutions across a wide variety of technical domains. Strong software development lifecycle knowledge and DataOps skills are a must. The ideal candidate is naturally curious, has excellent verbal and written communication skills, a sharp analytical mind, a positive attitude towards work, and thrives when collaborating towards a shared goal.
This is an internal and non-client-facing role.
Dataiku is unique in that every Dataiker is encouraged to use our own product within our Enterprise Data Platform. That means this is a rare opportunity to deliver a scalable platform with governed data to fuel an entire company of current or potential Data Analysts! Your responsibilities within the team include but are not limited to:
Be an expert level engineer within the Dataiku Platform including Platform Automation, GenAI Capabilities, Plugin Development, maintenance & troubleshooting
Be an expert level engineer within Snowflake for data engineering and security/governance features
Build & maintain Python & SQL-based platform automation processes
Build & maintain data quality metrics & observability to help drive data quality standards
Design data models for both short term and long term use cases to support data warehouse scalability
Build & maintain administration systems and applications for monitoring, alerting, data observability, access management, platform metrics, and end user transparency
Build & maintain GenAI platform solutions focused on security and governance for engineering delivery
Build & maintain DataOps process for SDLC delivery
Identify opportunities for improvements & optimization for greater scalability & delivery velocity
Collaborate closely with Analytics Engineers to provide data & data models for analytical deliverables
Perform root cause analysis on often complex errors to help ensure data pipeline availability
Help drive technical & architectural decisions on the data platform including decisions on data architecture, data engineering processes, data quality frameworks, data access security & governance frameworks, DataOps processes & data consumption models.
Help test new features in Dataiku and partner tools to both provide feedback internally as well as determine value towards internal analytics & data platform integration
Work closely with key stakeholders across the organization including Infra, embedded analytics teams, Product and Engineering to help foster both technical implementations & requirements gathering
Proactively drive innovation internally with dedicated innovation time & projects that aim to be transformational for either the platform, team or company as a whole.
Actively contribute to the expertise level and competencies of the EDA Team and participate in the creation and support of data development standards and best practices.
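A recurring pattern behind "building and maintaining replication & data pipelines" (mentioned in the qualifications below) is watermark-based incremental extraction: on each run, copy only the rows changed since the last successful sync. A dependency-free sketch — the table shape and the `updated_at` column are hypothetical stand-ins for a real source system and warehouse:

```python
# Watermark-based incremental replication, sketched without a database
# driver: `source` stands in for a source table, `target` for the
# warehouse table, and `watermark` is the high-water mark persisted
# between runs.

def incremental_sync(source, target, watermark):
    """Upsert rows with updated_at > watermark into target.

    Returns the new watermark to persist for the next run.
    """
    new_rows = [r for r in source if r["updated_at"] > watermark]
    by_id = {r["id"]: r for r in target}
    for r in new_rows:
        by_id[r["id"]] = r  # upsert semantics, keyed on the primary key
    target[:] = list(by_id.values())
    # If nothing changed, keep the old watermark (keeps the sync idempotent).
    return max((r["updated_at"] for r in new_rows), default=watermark)

source = [
    {"id": 1, "updated_at": 10},
    {"id": 2, "updated_at": 25},
]
target = [{"id": 1, "updated_at": 10}]
wm = incremental_sync(source, target, watermark=10)
print(wm, len(target))  # → 25 2
```

In Snowflake this same idea usually maps to streams/tasks or a MERGE keyed on the primary key; the sketch only illustrates the watermark bookkeeping that makes reruns safe.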
3+ years of relevant experience in Data Engineering / Data Platform Engineering
Expertise in SQL & Python is a must. Experience in Dataiku DSS is a big plus.
Prior experience with Snowflake strongly desired
Prior experience with DevOps technologies such as GitHub Actions, Azure DevOps, or Jenkins
Strong understanding of data architecture & data modeling concepts
Prior experience building and maintaining replication & data pipelines in a cloud data warehouse or data lake environment
Excellent analytical and creative problem-solving skills - exhibit confidence to ask questions to bring clarity, share ideas and challenge the norm.
Passion for continuous learning, and for teaching new technologies & implementation strategies to others
Experience working with complex stakeholders; dissecting vague asks and helping to define tangible requirements
Ability to manage multiple projects and time constraints simultaneously in a high-trust remote environment
Ability to wear multiple hats depending on the project with the focus on accomplishing end goals while inspiring colleagues to do the same
Excellent written and verbal communication skills (especially with senior-level stakeholders), with the ability to speak to the business value, data products, & technical capabilities of a platform. Ability to create clear and concise documentation with a high degree of precision
Ready to apply?
Apply to Dataiku
At Gigs, we're building the operating system for mobile services—a platform that lets tech companies embed global connectivity into their products effortlessly.
Just as Stripe lets any business add a payment button in seconds, Gigs empowers platforms to weave in connectivity—bridging the traditional world of telecom with modern tech. From fintechs launching mobile services to HR platforms offering work phone plans, we automate provisioning and remove telecom complexity.
Our team of around 100 people across the US and Europe is backed by nearly $100 million in funding from Ribbit Capital, Google, and Y Combinator. As one of the fastest-growing tech companies, we bring together early-stage engineers, product builders, and business athletes from companies like Stripe, Airbnb, and Shopify. We’re tackling deep technical and regulatory challenges to make connectivity truly seamless.
If you’re driven by curiosity, creativity, and the chance to shape the future of telecom, we’d love to hear from you.
We are building in a highly complex space and tackling a massive problem. We want people who lean in when things get hard.
Salary range: €85k - €120k.
At Gigs, we value in-person collaboration. We believe the best ideas, decisions, and relationships are built when teams spend meaningful time together, and our culture is designed around that belief. We support flexibility where it makes sense. Some focused work can be done remotely, and not every role or week looks the same. You should expect regular time in one of our hubs, as well as occasional travel for team workshops, customer meetings, and Gigs Republic, our bi-annual company off-site. Our offices are designed to feel like home-inspired workspaces, with plants, thoughtful tools, and small, tight-knit teams that make collaboration feel natural, energizing, and effective.
At Gigs, we believe in rewarding excellence. We offer competitive compensation and stock options because we see you as a true partner in our growth. We also provide stipends for your home office or work setup, a budget for learning and development to fuel your career, and of course, a free phone and international data plan.
Want to learn more about our benefits, hubs, and what it’s like to work at Gigs? Check out our Careers page.
Ready to apply?
Apply to Builders at Gigs
Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
Summary:
Location: Amsterdam
Duration: 3 months
Start date: June 2026
Compensation: Paid
Eligibility: Current University student (Computer Science or related field), Recent Graduate or Early Career specialist
Work authorization: Permitted to work in the job’s location
We are looking for an Early Talent to develop cloud-based integrations on Microsoft Azure. You will work with experienced engineers and gain hands-on experience building API-based and cloud-native integration solutions.
We expect you to have
Nice to Have
Your responsibilities will include
What we offer
We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!
Equal Opportunity Statement:
Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.
Applicants must be authorized to work in the country in which they apply, and will be required to provide proof of employment eligibility as a condition of hire.
Ready to apply?
Apply to Nebius
As a Linux Engineer in the Global Linux team, you will design, build and operate the critical infrastructure that powers our trading environment. The team owns both large-scale bare-metal Linux platforms and on-prem Kubernetes clusters across our global datacenters.
You will play a key role in evolving our Kubernetes platform while continuing to support and improve our high-performance Linux estate.
Design, deploy and operate on-prem Kubernetes clusters in a bare-metal environment (Rancher-managed)
Maintain and improve our bare-metal Linux infrastructure, including provisioning, lifecycle management and performance tuning
Own and operate core Kubernetes components including Cilium (CNI), ArgoCD (GitOps), and Rancher
Build and maintain CI/CD workflows for infrastructure and cluster deployments using ArgoCD and GitHub Actions
Develop and maintain infrastructure as code using Terraform and bespoke on-prem systems
Participate in on-call rotation covering both Kubernetes clusters and other Linux-based production systems
Drive reliability, scalability and performance improvements across both containerized and non-containerized workloads
Contribute to monitoring, observability and security posture across the Linux and Kubernetes platforms
Ensure the broader team can effectively operate and support the Kubernetes and Linux platforms by building clear documentation, operational runbooks, and providing practical, hands-on guidance
Strong Linux systems engineering experience in production environments
Hands-on experience operating Kubernetes on-prem (not only managed cloud offerings)
Experience with:
Rancher
Cilium or other CNI technologies
ArgoCD (GitOps workflows)
Terraform
CI/CD systems such as GitHub Actions
Solid understanding of networking fundamentals (L2/L3, routing, firewalls, load balancing)
Experience troubleshooting performance issues across kernel, container and network layers
Production mindset with experience in incident response and post-mortem driven improvements
Nice to have
Experience building monitoring stacks (e.g. Prometheus, Grafana)
Experience with Kubernetes security controls (network policies, RBAC, policy engines)
Knowledge of low-latency or performance-sensitive environments
Experience contributing to internal platform tooling
No prior trading or financial experience is needed – our onboarding program will kick-start your career at Optiver and support you in gaining all the relevant knowledge about our business.
Who we are
At Optiver, our mission is to constantly improve the market by injecting liquidity, providing accurate pricing, increasing transparency and acting as a stabilizing force no matter the market conditions. With a focus on continuous improvement, we help safeguard healthy and efficient markets for everyone who participates in them. As one of the largest market making institutions, we are a trusted partner of 70+ exchanges across the globe.
What you'll get
You’ll join a culture of collaboration and excellence, where you’ll be surrounded by curious thinkers and creative problem solvers. Motivated by a passion for continuous improvement, you’ll thrive in a supportive, high-performing environment alongside talented colleagues, working collectively to tackle the toughest problems in the financial markets.
In addition, you’ll receive:
How to apply
Apply directly via the form below. If you have any questions feel free to contact our Recruitment team via our recruitment inquiry form.
Please note:
Diversity statement
Optiver is committed to diversity and inclusion.
Ready to apply?
Apply to Optiver
Workato delivers enterprise infrastructure for the agentic era, redefining iPaaS and helping enterprises unify data, applications, processes, and AI into a single, governed platform. A leader in Enterprise MCP and trusted by 50% of the Fortune 500, Workato’s cloud-native architecture connects every application, data source, and process to power real-time orchestration at scale. With enterprise-grade security and continuous innovation at its core, Workato provides the trusted foundation for organizations to automate with confidence and operationalize AI across the business. To learn more, visit www.workato.com
Ultimately, Workato believes in fostering a flexible, trust-oriented culture that empowers everyone to take full ownership of their roles. We are driven by innovation and looking for team players who want to actively build our company.
But, we also believe in balancing productivity with self-care. That’s why we offer all of our employees a vibrant and dynamic work environment along with a multitude of benefits they can enjoy inside and outside of their work lives.
If this sounds right up your alley, please submit an application. We look forward to getting to know you!
Also, feel free to check out why:
Business Insider named us an “enterprise startup to bet your career on”
Forbes’ Cloud 100 recognized us as one of the top 100 private cloud companies in the world
Deloitte Tech Fast 500 ranked us as the 17th fastest growing tech company in the Bay Area, and 96th in North America
Quartz ranked us the #1 best company for remote workers
We are looking for a Senior Python Engineer to play a key role in building the core of our AI platform. In this position, you will design and develop production-grade systems that power intelligent automation, agentic workflows, and large-scale retrieval services. This is a highly technical, hands-on role that involves close collaboration with product and platform teams to transform advanced AI concepts into reliable, scalable, and secure solutions used across our enterprise ecosystem. You will also:
Design, build, and maintain AI-powered services and APIs, leveraging LLMs (OpenAI, Anthropic, Qwen, OSS models) and custom ML models.
Develop an enterprise-grade agentic framework that enables orchestration, retrieval, and collaboration between multiple AI agents.
Implement and optimize knowledge retrieval systems and agentic search capabilities using vector databases such as Qdrant and ElasticSearch.
Write well-structured, efficient, and testable Python code for production services, experimentation, and internal developer tools.
Build and maintain shared Python libraries and SDKs used across multiple applications and microservices.
Collaborate with cross-functional teams on architecture, internal protocols, and API standards to ensure consistency and reliability across the platform.
Develop and enhance monitoring, validation, and observability for production-grade AI solutions.
Drive the full software development lifecycle - from design and implementation to deployment, monitoring, and continuous improvement.
Identify and resolve performance bottlenecks, reliability issues, and scaling challenges in complex, data-intensive environments.
Participate in code reviews and technical discussions, mentoring other engineers and contributing to a culture of excellence.
Building an evaluation and observability framework for AI model performance and reliability.
Developing an agentic orchestration platform that enables collaboration among multiple AI agents and tools.
Implementing semantic retrieval and agentic search capabilities over large enterprise knowledge bases.
Designing AI services that process and reason over high-volume real-world data at scale.
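The retrieval work described above typically reduces to nearest-neighbor search over embedding vectors. Setting aside the Qdrant/ElasticSearch specifics, a dependency-free sketch of the core ranking step — the toy 3-dimensional vectors and document IDs are invented for illustration; real embeddings have hundreds of dimensions and are produced by a model:

```python
import math

# Core of semantic retrieval: rank documents by cosine similarity between
# a query embedding and document embeddings. A vector database like Qdrant
# does this at scale with approximate-nearest-neighbor indexes; here we
# brute-force it over a handful of toy vectors.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, docs, k=2):
    """docs: list of (doc_id, vector); return the k ids ranked by similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

docs = [
    ("billing-faq",   [0.9, 0.1, 0.0]),
    ("api-guide",     [0.1, 0.9, 0.2]),
    ("release-notes", [0.0, 0.2, 0.9]),
]
print(top_k([0.8, 0.2, 0.1], docs, k=2))  # → ['billing-faq', 'api-guide']
```

An agentic search layer adds steps around this core — query rewriting, filtering by metadata, and feeding the top hits back to an LLM — but the ranking primitive stays the same.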
Qualifications / Experience / Technical Skills
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
5+ years of experience as a Software Engineer, with strong proficiency in Python.
Proven track record of building and maintaining production-grade systems using Python.
Strong understanding of distributed systems, API design, and data-driven architectures.
Experience with relational and non-relational databases (PostgreSQL, Elastic, Qdrant, or similar).
Familiarity with AI/ML system design, including LLM integration and evaluation pipelines.
Knowledge of DevOps and observability practices (CI/CD, monitoring, metrics, and model validation).
Experience working with multiple LLM providers (OpenAI, Anthropic, Qwen, open-source models).
Background in developer platforms or AI infrastructure services.
Familiarity with vector databases, semantic retrieval, and knowledge graph architectures.
Exposure to Langfuse, LiteLLM, LangChain, or similar frameworks.
Experience developing enterprise-scale SaaS or distributed backend systems.
Contributions to open-source projects in Python, AI, or infrastructure engineering.
Excellent communication skills, with the ability to convey complex technical ideas clearly to both technical and non-technical audiences.
Collaborative and proactive approach, comfortable working across teams in a dynamic environment.
Strong analytical and problem-solving abilities, with a focus on continuous improvement and innovation.
Curiosity and a genuine interest in emerging AI technologies and modern backend architectures.
Ready to apply?
Apply to Workato
At JetBrains, code is our passion. Ever since we started, back in 2000, we have strived to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
AI is no longer just an assistant inside the editor – it is becoming an active participant in how software is planned, built, reviewed, and operated across teams and organizations. This shift introduces new challenges that cannot be solved at the level of individual tools alone: governance, security, cost control, observability, and coordinated work between humans and autonomous agents.
Our goal is to build a platform that enables companies to adopt AI in software development in a structured, scalable, and economically efficient manner – without locking them into closed ecosystems.
We are building the foundation that connects developer workflows, team-level collaboration, and organizational control into a single coherent system.
This platform will serve as the execution and governance layer for AI-driven development, deeply integrated with developer tools but designed to work across teams, products, and environments.
This is a long-term strategic investment for JetBrains and a key pillar of our vision for the future of software development.
The role
We are looking for a Principal Engineer (JetBrains Cloud Platform, Developer Experience) to drive large-scale improvements to the development experience across the JCP.
This role focuses on making JCP engineers faster and more productive by improving build systems, CI/CD pipelines, local development workflows, tooling infrastructure, and AI pipelines. You will own the developer experience end to end – from how engineers develop, build, and test locally to how code moves through CI and reaches production. As this is a rapidly growing platform, many workflows and processes are still maturing. You will define what a great development experience looks like and drive the organization toward it.
As part of the team, you will:
- Own and drive the strategy for developer experience improvements across the entire JCP platform.
- Optimize build systems (Gradle, Nx, and others) for faster builds, better caching, and reliable reproducibility at scale.
- Set up AI development pipelines, managing context, agents, and handoffs, and leveraging tools developed within the JCP.
- Improve CI/CD pipelines – reduce build times, increase reliability, optimize resource usage, and shorten feedback loops across TeamCity and GitHub.
- Improve containerized development workflows (Docker, Dev Containers) to ensure fast and consistent local environments.
- Identify and eliminate bottlenecks in the development cycle – from code commit to production deployment.
- Establish best practices, tooling standards, and shared infrastructure that enable all teams to move faster.
- Collaborate with platform and product teams to understand pain points and deliver high-impact improvements.
We are looking for someone who:
- Has extensive experience with build systems (Gradle, Maven, or Bazel) and a track record of optimizing them at scale.
- Has strong expertise in CI/CD systems (GitHub Actions, TeamCity, Jenkins, or similar) and knows how to ensure high performance and reliability.
- Has hands-on experience with Docker and containerized development workflows.
- Understands the full software development life cycle and can reason about developer productivity holistically.
- Is able to drive cross-team initiatives and influence engineering practices across a large organization.
- Is motivated by making other engineers more productive and removing friction from their daily work.
We'd be particularly thrilled if you:
- Have experience improving the developer experience at scale in a platform or infrastructure organization.
- Have worked on AI-driven development pipelines.
- Have expertise in build caching, remote execution, and incremental build strategies.
- Have contributed to or maintained open-source build tooling or CI/CD infrastructure.
- Enjoy digging into performance problems and turning slow, flaky processes into fast, reliable ones.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
The role
We are looking for a Developer Experience Lead to drive large-scale improvements to the development experience across the JetBrains Cloud Platform (JCP).
This role focuses on making JCP engineers faster and more productive by improving build systems, CI/CD pipelines, local development workflows, tooling infrastructure, and AI pipelines. You will own the developer experience end to end – from how engineers develop, build, and test locally to how code moves through CI and reaches production. As this is a rapidly growing platform, many workflows and processes are still maturing. You will define what a great development experience looks like and drive the organization toward it.
As part of the team, you will:
- Own and drive the strategy for developer experience improvements across the entire JCP platform.
- Optimize build systems (Gradle, Nx, and others) for faster builds, better caching, and reliable reproducibility at scale.
- Set up AI development pipelines, managing context, agents, and handoffs, and leveraging tools developed within the JCP.
- Improve CI/CD pipelines – reduce build times, increase reliability, optimize resource usage, and shorten feedback loops across TeamCity and GitHub.
- Improve containerized development workflows (Docker, Dev Containers) to ensure fast and consistent local environments.
- Identify and eliminate bottlenecks in the development cycle – from code commit to production deployment.
- Establish best practices, tooling standards, and shared infrastructure that enable all teams to move faster.
- Collaborate with platform and product teams to understand pain points and deliver high-impact improvements.
We are looking for someone who:
- Has extensive experience with build systems (Gradle, Maven, or Bazel) and a track record of optimizing them at scale.
- Has strong expertise in CI/CD systems (GitHub Actions, TeamCity, Jenkins, or similar) and knows how to ensure high performance and reliability.
- Has hands-on experience with Docker and containerized development workflows.
- Understands the full software development life cycle and can reason about developer productivity holistically.
- Is able to drive cross-team initiatives and influence engineering practices across a large organization.
- Is motivated by making other engineers more productive and removing friction from their daily work.
We'd be particularly thrilled if you:
- Have experience improving the developer experience at scale in a platform or infrastructure organization.
- Have worked on AI-driven development pipelines.
- Have expertise in build caching, remote execution, and incremental build strategies.
- Have contributed to or maintained open-source build tooling or CI/CD infrastructure.
- Enjoy digging into performance problems and turning slow, flaky processes into fast, reliable ones.
Ready to apply?
Apply to JetBrains
Kineto is a next-generation platform that enables creators, educators, and small businesses to generate, deploy, and operate fully functional AI-powered web applications – instantly and at scale. It combines LLM-driven code generation, multi-tenant Postgres (Neon), dynamic hosting (GKE and Knative), automated deployments (Flux), analytics, billing, and a seamless chat-based UX to make software creation accessible to everyone. Our team is growing rapidly, and we’re now seeking an experienced Infrastructure Engineer who can design, build, and maintain our cloud-native platform, with a focus on scalability, reliability, and automated operations.
Ready to apply?
Apply to JetBrains
Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs. The ML Workflows Engineering team is dedicated to removing infrastructure challenges, streamlining machine learning operations (MLOps), and enabling teams to focus on the innovative work that matters most – building impactful ML models and intelligent agents. As part of the team, you'll play a key role in designing tools, automation, and pipelines that make machine learning development seamless and intuitive.
By integrating cutting-edge MLOps practices and engineering excellence, we aim to maximize productivity and remove the complexity of ML infrastructure so that our teams can push the boundaries of what’s possible in AI.
Ready to apply?
Apply to JetBrains
SurveyMonkey is the world’s most popular platform for surveys and forms, built for business—loved by users. We combine powerful capabilities with intuitive design, effectively serving every use case, from customer experience to employee engagement, market research to payment and registration forms. With built-in research expertise and AI-powered technology, it’s like having a team of expert researchers at your fingertips.
Trusted by millions—from startups to Fortune 500 companies—SurveyMonkey helps teams gather insights and information that inspire better decisions, create experiences people love, and drive business growth. Discover how at surveymonkey.com.
What we’re looking for
The Quality Platform team is looking for a Software Engineer passionate about software quality and test automation. We are a small team dedicated to supporting the tooling and infrastructure that help our engineering teams write, execute, and report on tests. You’ll have the opportunity to debug test automation issues with fellow engineers and build metric dashboards to help report on product quality.
What you’ll be working on
We’d love to hear from people with
SurveyMonkey believes in-person collaboration is valuable for building relationships, fostering community, and enhancing our speed and execution in problem-solving and decision-making. As such, you will be required to work from our Amsterdam office up to 1 day per week.
Why SurveyMonkey? We’re glad you asked
At SurveyMonkey, curiosity powers everything we do. We’re a global company where people from all backgrounds can make an impact, build meaningful connections, and grow their careers. Our teams work in a flexible, hybrid environment with thoughtfully designed offices and programs like the CHOICE Fund to help employees thrive in work and life.
We’ve been trusted by organizations for over 25 years, and we’re just getting started. Our milestones include celebrating a quarter-century of curiosity with 25 acts of giving, opening new hubs in Costa Rica and India, crossing the threshold of 100 billion questions answered, and earning recognition as one of the Most Inspiring Workplaces across North America and Asia.
We live our company values—like championing inclusion and making it happen—by embedding them into how we hire, collaborate, and grow. They help shape everything from our culture to our business decisions. Come join us and see where your curiosity can take you.
Our commitment to an inclusive workplace
SurveyMonkey is an equal opportunity employer committed to providing a workplace free from harassment and discrimination. We celebrate the unique differences of our employees because that is what drives curiosity, innovation, and the success of our business. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity or expression, age, marital status, veteran status, disability status, pregnancy, parental status, genetic information, political affiliation, or any other status protected by the laws or regulations in the locations where we operate. Accommodations are available for applicants with disabilities.
Your data
For more information on how SurveyMonkey (including its subsidiary and affiliated companies) processes your personal data as a job candidate or applicant, please see our Global Applicant and Candidate Data Privacy Notice. Please note that we may use artificial intelligence (AI) tools to support parts of the hiring process, such as sourcing candidates, reviewing applications, analyzing resumes, or summarizing interviews. These tools assist our recruitment team but do not replace human judgment.
Ready to apply?
Apply to SurveyMonkey
WHO WE ARE 🌍
We help creators get more out of every conversation with Instagram-focused automations and support for other channels like Messenger, WhatsApp, and TikTok. The result? Better engagement, more sales, and real, sustainable growth.
With a diverse team of 350+ people spread across three continents, we’re building the leading Chat Marketing platform that is used — and loved — by more than 1.5 million customers worldwide.
WHO WE'RE LOOKING FOR 🌟
We’re looking for a Senior Site Reliability Engineer who thrives at the crossroads of classic Linux and AWS infrastructure and modern Site Reliability Engineering. This is a high-impact, hybrid role designed for someone who can manage cloud resources, harden Kubernetes clusters, and shape a more reliable and developer-friendly platform.
We need you not just to maintain but to rethink and evolve our infrastructure, balancing hands-on operations with strategic improvements that future-proof our growing AI product landscape.
You’ll take over key responsibilities from our current Infra Lead who is transitioning to a software-focused role, giving you immediate ownership and space to shine.
WHY THE ROLE IS SPECIAL 💡
You won’t be a cog in a massive SRE org. You’ll be the bridge between Infrastructure and Engineering, shaping how we scale Kubernetes, how we approach platform reliability, and how developers ship fast without fear. You’ll get autonomy, ownership, and a smart, humble team excited to learn with you.
WHAT YOU’LL DO 🤖
WHAT WE OFFER 🤗
We care deeply about your growth, well-being, and comfort:
Manychat is an Equal Opportunity Employer. We’re committed to building a diverse and inclusive team. We do not discriminate against qualified employees or applicants because of race, color, religion, gender identity, sex, sexual preference, sexual identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, military status, or any other characteristic protected by local law or ordinance.
This commitment is also reflected through our candidate experience. If you have individual needs that may require an accommodation during the interview process, please indicate this in your application. We will do our best to provide assistance throughout your interview process to ensure you’re set up for success.
Ready to apply?
Apply to Manychat