All active Airflow roles based in the Czech Republic.
About Nebius:
Nebius is leading a new era in cloud infrastructure for the global AI economy. We are building a full-stack AI cloud platform that supports developers and enterprises from data and model training through to production deployment, without the cost and complexity of building large in-house AI/ML infrastructure.
Built by engineers, for engineers. From large-scale GPU orchestration to inference optimization, we own the hard problems across compute, storage, networking and applied AI.
Listed on Nasdaq (NBIS) and headquartered in Amsterdam, we have a global footprint with R&D hubs across Europe, the UK, North America and Israel. Our team of 1,500+ includes hundreds of engineers with deep expertise across hardware, software and AI R&D.
The Data Engineering team is responsible for building and maintaining the robust data infrastructure that powers analytics and business intelligence across Nebius. We design and implement scalable data pipelines, optimize data storage and processing, and enable data-driven decision making across the organization. The team works closely with product teams and business stakeholders to ensure alignment with company goals.
We are looking for a Data Engineer to design, build, and maintain our data infrastructure and pipelines. You will work on processing large-scale datasets, optimizing data workflows, and enabling analytics capabilities that support our rapidly growing cloud platform.
Your responsibilities:
Must-haves:
Nice-to-haves:
We conduct coding interviews as part of the process.
Benefits & Perks:
What's it like to work at Nebius:
Fast moving - Bold thinking - Constant growth - Meaningful impact - Trust and real ownership - Opportunity to shape the future of AI
Equal Opportunity Statement:
Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.
Applicants must be authorized to work in the country in which they apply and will be required to provide proof of employment eligibility as a condition of hire.
If you need accommodations during the application process, please let us know.
We’re looking for a Software Engineer with strong C++ expertise to join the team building and operating Nebius Data Platform — a distributed storage and processing platform that acts as the company’s “source of truth” and the backbone of many internal (and some external) products.
Nebius Data Platform is a single multi-tenant ecosystem based on YTsaurus — instead of running separate HDFS/Kafka/HBase-style systems, we provide storage, compute, and analytics capabilities inside one platform.
Built on top of the open-source YTsaurus ecosystem, we run and extend our own Nebius distribution and develop significant in-house functionality (core and platform-level). We can design, implement, and roll out features end-to-end on our clusters without waiting for upstream approvals and contribute upstream when it makes sense.
At scale today, this includes ~500 servers, ~20k CPU cores, and ~10 PB of compressed data in our largest production cluster, supporting workloads ranging from business-critical pipelines and financial transactions to large-scale ML/LLM training datasets and compute.
You’ll work on a system that includes (and ties together):
We’re looking for engineers who combine strong systems skills with product sense: understanding who uses the platform, why certain capabilities matter, and making pragmatic trade-offs to maximize impact. On our team, engineering work is expected to be connected to real users and outcomes — you’ll regularly align with internal stakeholders, clarify requirements, and help drive prioritization.
In this role, you will:
We conduct coding interviews as part of the process.
Emplifi is a leading AI-powered social media marketing and customer experience platform, empowering brands to deliver meaningful, connected experiences across digital channels. Recognized as a Leader by renowned analysts and celebrated as a customer favorite, Emplifi provides innovative, data-driven insights and AI-powered tools to help brands optimize social media performance, elevate their influencer marketing strategies, and deliver impactful customer engagement across marketing, commerce, and care.
We're a group of enthusiastic business analysts, data analysts, engineers, and Salesforce developers. We also have a team consisting of developers taking care of our SaaS toolsets. People matter the most. We try to create a culture where people feel productive and creative, can learn and grow, and every voice is heard.
Projects run on an agile methodology, and we tackle interesting problems, from anomaly detection and optimal product setup to retention prediction and analysis of macro market trends using gigabytes of data. Our stack consists of Python, SQL, Tableau, Apache Spark, AWS S3, and PostgreSQL. We also manage Salesforce (covering both marketing and sales processes). While code quality matters, what defines us is our ability to combine strong technical solutions with a sharp business mindset.
At Emplifi, data is more than numbers—it’s insight, innovation, and impact. As our Data Analyst, you’ll be at the forefront of shaping business and product decisions across the organization. We are looking for someone with natural curiosity, strong independence, and a deep sense of accountability—someone who thrives on discovering the "why" behind the data and who is motivated to deliver meaningful outcomes. You’ll uncover trends in the social media landscape that feed into global publications, deliver actionable insights that power our strategy, and support client success through fast, accurate data exports.
In this dynamic role, you'll also support Sales enablement and pipeline building, while mastering powerful technologies like Python, Databricks, PySpark, S3, and Airflow. You’ll work with terabytes of data in our evolving Data Lake, all within a collaborative and flexible Agile (SCRUM) environment where experimentation is encouraged and learning never stops.
On-Demand Insights
Autonomous Insight Generation
Automated Reporting Pipelines
Go Beyond Analytics
Must Have Skills
Your Work Style
Additional Tools
At Emplifi, we are committed to creating a workplace where everyone is valued, respected, and empowered to bring their whole selves to work. We welcome applications from individuals of all ages, races, religions, genders, sexual orientations, gender identities, and LGBTQ+ communities.
Emplifi offers a safe, inclusive, and supportive environment where every employee has the opportunity to thrive and is encouraged to be who they are.
We welcome and encourage applicants with disabilities. Accommodations are available upon request at any stage of the recruitment process.
We’re in an unbelievably exciting area of tech and are fundamentally reshaping the data storage industry. Here, you lead with innovative thinking, grow along with us, and join the smartest team in the industry.
This type of work—work that changes the world—is what the tech industry was founded on. So, if you're ready to seize the endless opportunities and leave your mark, come join us.
As a Data Engineer with an agentic AI focus, you will architect the high-performance data ecosystem that transforms raw information into actionable intelligence and agentic AI solutions. You will bridge the gap between complex engineering and business impact, collaborating with Data Scientists and Product Managers to build the infrastructure that scales our AI capabilities. This isn't just about moving data; it’s about engineering the foundation for the next generation of automated customer support and operational excellence.
WHAT YOU CAN EXPECT FROM US:
And because we understand the value of bringing your full and best self to work, we offer a variety of perks to manage a healthy balance, including flexible time off, wellness resources, and company-sponsored team events. Check out purebenefits.com for more information.
ACCOMMODATIONS AND ACCESSIBILITY:
Candidates with disabilities may request accommodations for all aspects of our hiring process. For more on this, contact us at TA-Ops@purestorage.com if you’re invited to an interview.
OUR COMMITMENT TO A STRONG AND INCLUSIVE TEAM:
We’re forging a future where everyone finds their rightful place and where every voice matters. Where uniqueness isn’t just accepted but embraced. That’s why we are committed to fostering the growth and development of every person, cultivating a sense of community through our Employee Resource Groups and advocating for inclusive leadership.
Everpure is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or any other characteristic legally protected by the laws of the jurisdiction in which you are being considered for hire.
Join us and bring your best.
Bring your bold.
Pure and simple.
THE ROLE
We’re building a new capability inside Platform Engineering, within our Infrastructure Shared Services (ISS) organisation: Data Platform Engineering. Our goal is to create a platform that removes friction, providing a consistent way to build, operate, and evolve data-driven services with clear ownership, guardrails, and a strong developer experience. Everpure’s own platform thinking is anchored in the idea of a unified data plane: a single fabric that connects data across locations and delivers consistent, secure data services across protocols and applications.
In this role, you’ll build and operate a trusted, scalable, and intelligent data platform that empowers teams to make faster data-driven decisions while shaping a culture of ownership and innovation to meet the needs of an ever-changing business and technology landscape.
WHAT YOU'LL DO
WHAT YOU BRING
Join the newly formed Datacraft team — the team building the next-generation data platform for Bloomreach Engagement. Datacraft owns three interconnected domains:
As a Senior SRE, you will be the reliability backbone of this AI-first data team. Your work will directly impact the deployments, reliability, and observability of the pipelines and services that hundreds of enterprise customers depend on — from data exports into Databricks and BigQuery to the insights surfaced by the AI agent Loomi.
Datacraft is an AI-first team. We believe code is a commodity and expect every engineer to fluently use coding agents (e.g., Cursor, Claude Code, Copilot, Gemini CLI) as a core part of their daily workflow. The ability to leverage AI tooling to accelerate development, prototyping, and problem-solving is not optional — it's foundational.
For candidates at the P3 / Senior SRE level, starting monthly compensation begins at 3,800 € gross, with the final offer tailored to each candidate based on their skills and experience. Stock options and a comprehensive benefits package are also included. Working from one of our Central European offices (Bratislava, Prague, Brno) or from home on a full-time basis, you'll become a core part of the Engineering team.
As a P3 (Senior) SRE at Bloomreach, you are an independent professional — expert in reliability engineering, able to decompose objectives into actionable infrastructure improvements, and lead initiatives end-to-end with minimal day-to-day guidance.
We need you to build and operate an ecosystem where data engineers can safely and efficiently develop, debug, and operate data-intensive jobs and services — spanning Kafka ingest pipelines, Iceberg data lakes, multi-DWH exports, Databricks deployment and orchestration (Airflow / Cloud Composer), and agentic AI workloads.
Languages: Python (primary), Go, SQL
Messaging & streaming: Apache Kafka
Storage & databases: Databricks, BigQuery, Apache Iceberg, GCS, Mongo, Redis
Data processing & orchestration: Apache Spark, DataFlow, Airflow / Cloud Composer
Infrastructure: GCP, Kubernetes, Terraform
AI / Agentic: LLM APIs, MCP, agent orchestration frameworks
Observability: Grafana, Prometheus, Victoria Metrics, PagerDuty, Sentry, OpenTelemetry
CI/CD & tooling: GitLab, Jira, Confluence
AI coding agents: Cursor, Claude Code
Impact
Ownership
Systematic approach
Data-driven
Technical skills
In 30 days:
In 90 days:
In 180 days:
(*Subject to employment type. Interns are exempt from marked benefits, usually for the first 6 months.)
Excited? Join us and transform the future of commerce experiences!
If this position doesn't suit you, but you know someone who might be a great fit, share it - we will be very grateful!
Any unsolicited resumes/candidate profiles submitted through our website or to personal email accounts of employees of Bloomreach are considered property of Bloomreach and are not subject to payment of agency fees.
At Capco, we specialize in management consulting and software development, partnering with leading international banks and insurance companies across Western Europe. Our innovative approach and collaborative culture make us the ideal place for professionals looking to make an impact in financial services technology.
Want to learn more about our culture and people? Visit us: Capco Careers Slovakia
We are building a modern enterprise data platform to support our next phase of digital transformation, including SAP integration, internal application evolution, and advanced BI capabilities.
We are seeking a Data Platform & Architecture Lead to design, own, and evolve how data flows across our organization. This role is focused on data architecture, integration, modeling and database administration.
You will play a critical role in establishing a scalable, well-governed data foundation that enables consistent reporting, operational insights, and future AI capabilities.
Candidates who stand out will be able to clearly articulate:
✅ Screening Call – Introductory chat with our recruiter
✅ Technical Interviews (1-2 rounds, depending on the project) – Discuss your skills with our experienced Capco colleagues
✅ Feedback & Offer – Receive an offer to join our team!
At Capco, we prioritize your career growth and well-being. You will have a personal coach to support your development and a flat hierarchy that encourages you to share and implement your ideas. Plus, our team events, sports activities, and networking opportunities help you stay connected and engaged beyond work.
Curious to learn more? Apply now – it only takes 5 minutes!
📩 Contact: Tomas Vanco – tomas.vanco@capco.com
💻 Connect on LinkedIn: Tomas Vanco | LinkedIn
At JetBrains, code is our passion. Ever since we started, back in 2000, we’ve been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs. The ML Workflows Engineering team is dedicated to removing infrastructure challenges, streamlining machine learning operations (MLOps), and enabling teams to focus on the innovative work that matters most – building impactful ML models and intelligent agents. As part of the team, you'll play a key role in designing tools, automation, and pipelines that make machine learning development seamless and intuitive.
By integrating cutting-edge MLOps practices and engineering excellence, we aim to maximize productivity and remove the complexity of ML infrastructure so that our teams can push the boundaries of what’s possible in AI.
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
We’re building multi-step coding agents that can understand large codebases, plan changes, call tools, and iterate with the user. As a Research Engineer in the Agentic Models team, you’ll be responsible for the models, training loops, and evaluation pipelines that power these agents.
You’ll work at the intersection of SFT- and RL-style post-training and product-driven evaluation, using our distributed GPU and MapReduce clusters to ship models into JetBrains products.