All active LLM roles based in Prague.
About Nebius:
Nebius is leading a new era in cloud infrastructure for the global AI economy. We are building a full-stack AI cloud platform that supports developers and enterprises from data and model training through to production deployment, without the cost and complexity of building large in-house AI/ML infrastructure.
Built by engineers, for engineers. From large-scale GPU orchestration to inference optimization, we own the hard problems across compute, storage, networking and applied AI.
Listed on Nasdaq (NBIS) and headquartered in Amsterdam, we have a global footprint with R&D hubs across Europe, the UK, North America and Israel. Our team of 1,500+ includes hundreds of engineers with deep expertise across hardware, software and AI R&D.
The role
Token Factory is part of Nebius Cloud, one of the world's largest GPU clouds, running tens of thousands of GPUs. We are building a high-performance inference and fine-tuning platform designed to push foundation models to their hardware limits. Our mission is to maximize throughput, minimize latency, and optimize cost-per-token at that scale.
Some of the directions we are currently working on, which you could be a part of:
We expect you to have:
Nice to have:
Benefits & Perks:
What's it like to work at Nebius:
Fast moving - Bold thinking - Constant growth - Meaningful impact - Trust and real ownership - Opportunity to shape the future of AI
Equal Opportunity Statement:
Nebius is an equal opportunity employer. We are committed to fostering an inclusive and diverse workplace and to providing equal employment opportunities in all aspects of employment. We do not discriminate on the basis of race, color, religion, sex (including pregnancy), national origin, ancestry, age, disability, genetic information, marital status, veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable law.
Applicants must be authorized to work in the country in which they apply and will be required to provide proof of employment eligibility as a condition of hire.
If you need accommodations during the application process, please let us know.
Ready to apply?
Apply to Nebius
The role
At Nebius, we’re building a next-generation AI compute platform for large-scale ML training and inference — from a few nodes to thousands of GPUs.
We’re looking for a Technical Product Manager to own product direction for Soperator — our Slurm-on-Kubernetes control plane for GPU clusters.
In this role, you will shape how ML engineers and research teams run, scale, and optimize distributed workloads in production.
If you care about systems that combine performance, reliability, and developer experience at the frontier of AI infrastructure, this role is for you.
Your responsibilities will include:
• Own the full user journey across Soperator clusters: Slurm workflows, dashboards, alerts/notifications, node lifecycle, and training/inference capacity management.
• Define product direction end-to-end: problem discovery → solution design → delivery → adoption.
• Lead deep customer discovery through interviews, usage analytics, and workload analysis to uncover high-impact opportunities.
• Drive execution across platform teams: compute, networking, storage, observability, IAM, and more.
• Translate frontier ML and infrastructure ideas into practical product capabilities for real-world GPU clusters.
• Define success metrics, prioritize roadmap decisions with data, and ensure measurable customer/business impact.
• Lead the open-source strategy and execution for Soperator: shape public roadmap themes, prioritize OSS-facing capabilities, and ensure strong adoption in the community.
We expect you to have:
• 3–5+ years in Product Management, ML infrastructure/MLOps, distributed systems, or cloud platform engineering.
• Strong technical depth in distributed systems, cloud infrastructure, or ML platforms.
• Hands-on familiarity with large-scale ML training and orchestration tools (e.g., Slurm, Kubernetes, Ray).
• Track record of shipping technically complex products with multiple engineering teams.
• Strong communication and stakeholder management across engineering, research, and customers.
• Experience with product analytics, data-informed prioritization, and experimentation.
• High ownership, high learning velocity, and comfort operating in fast-moving AI infrastructure environments.
It will be an added bonus if you have:
• Experience with GPU platforms and HPC primitives: InfiniBand/RDMA, topology-aware scheduling, high-throughput storage.
• Practical understanding of modern ML training stacks: PyTorch, DeepSpeed, FSDP/ZeRO, NCCL.
• Familiarity with efficiency and reliability metrics: Goodput, MFU, failure modes, preemption handling, health checks.
• Exposure to large-scale LLM training/inference systems.
• Experience in observability, performance tuning, or SRE/reliability engineering.
• Customer-facing technical experience (solutioning, support, architecture advisory).
About Nebius
Nebius AI is an AI cloud platform with one of the largest GPU capacities in Europe. Launched in November 2023, the Nebius AI platform provides high-end, training-optimized infrastructure for AI practitioners. As an NVIDIA preferred cloud service provider, Nebius AI offers a variety of NVIDIA GPUs for training and inference, as well as a set of tools for efficient multi-node training.
Nebius AI owns a data center in Finland, built from the ground up by the company’s R&D team and showcasing our commitment to sustainability. The data center is home to ISEG, the most powerful commercially available supercomputer in Europe and the 16th most powerful globally (Top 500 list, November 2023).
Nebius’s headquarters are in Amsterdam, Netherlands, with teams working out of R&D hubs across Europe and the Middle East.
Nebius AI is built with the talent of more than 500 highly skilled engineers with a proven track record in developing sophisticated cloud and ML solutions and designing cutting-edge hardware. This allows all the layers of the Nebius AI cloud – from hardware to UI – to be built in-house, distinctly differentiating Nebius AI from the majority of specialized clouds: Nebius customers get a true hyperscaler-cloud experience tailored for AI practitioners. We’re growing and expanding our products every day.
This role is for Nebius AI R&D, a team focused on applied research and the development of AI-heavy products. Examples of applied research that we have recently published include:
One example of an AI product that we are deeply involved in is Nebius Token Factory — an inference and fine-tuning platform for AI models.
This role requires expertise in distributed systems to build a large-scale LLM training platform.
Your responsibilities will include:
We expect you to have:
Nice to have:
We’re looking for a Software Engineer with strong C++ expertise to join the team building and operating Nebius Data Platform — a distributed storage and processing platform that acts as the company’s “source of truth” and the backbone of many internal (and some external) products.
Nebius Data Platform is a single multi-tenant ecosystem based on YTsaurus — instead of running separate HDFS/Kafka/HBase-style systems, we provide storage, compute, and analytics capabilities inside one platform.
Built on top of the open-source YTsaurus ecosystem, we run and extend our own Nebius distribution and develop significant in-house functionality (core and platform-level). We can design, implement, and roll out features end-to-end on our clusters without waiting for upstream approvals and contribute upstream when it makes sense.
At scale today, this includes ~500 servers, ~20k CPU cores, and ~10 PB of compressed data in our largest production cluster, supporting workloads ranging from business-critical pipelines and financial transactions to large-scale ML/LLM training datasets and compute.
You’ll work on a system that includes (and ties together):
We’re looking for engineers who combine strong systems skills with product sense: understanding who uses the platform, why certain capabilities matter, and making pragmatic trade-offs to maximize impact. On our team, engineering work is expected to be connected to real users and outcomes — you’ll regularly align with internal stakeholders, clarify requirements, and help drive prioritization.
In this role, you will:
We conduct coding interviews as part of the process.
Token Factory is a part of Nebius Cloud, one of the world’s largest GPU clouds, running tens of thousands of GPUs. We are building an inference & fine-tuning platform that makes every kind of foundation model — text, vision, audio, and emerging multimodal architectures — fast, reliable, and effortless to train & deploy at massive scale.
Advanced Fine-Tuning: Enhancing fine-tuning methodologies (both LoRA-based and full-parameter) for cutting-edge LLMs (e.g., GPT-OSS, Kimi K2.5, DeepSeek V3.1/V3.2, GLM-4.7), focusing on both model quality and training efficiency.
We expect you to have:
• A profound understanding of the theoretical foundations of machine learning and reinforcement learning.
• Deep expertise in modern deep learning for language processing and generation.
• Experience with training large models across multiple computational nodes.
• A solid understanding of the performance aspects of large neural network training (sharding strategies, custom kernels, hardware features, etc.).
• Strong software engineering skills (we mostly use Python).
• Deep experience with modern deep learning frameworks (we use JAX).
• Proficiency in contemporary software engineering practices, including CI/CD, version control, and unit testing.
• Strong communication and leadership abilities.
Nice to have:
• Previous experience working with language models or similar NLP technologies.
• Familiarity with important ideas in the LLM space, such as MHA, RoPE, ZeRO/FSDP, FlashAttention, and quantization.
• A track record of building and delivering products (not necessarily ML-related) in a dynamic, startup-like environment.
• Strong engineering skills, including experience developing large distributed systems or high-load web services.
• Open-source projects that showcase your engineering prowess.
• Excellent command of English, along with strong writing and communication skills.
Nebius Token Factory is a next-generation platform for LLM inference and deployment. It gives companies and developers access to dozens of state-of-the-art open-source models (LLMs, Vision, Embeddings, Image Generation) with enterprise-grade guarantees, including private endpoints, zero-retention data flow, transparent pricing, and easy scaling without GPU ops overhead.
Token Factory is part of Nebius, a company building next-generation cloud infrastructure for the global AI economy, helping teams solve real-world problems and scale AI without massive infrastructure costs or large in-house ML teams.
We are looking for a strong Product Designer who will help turn complex AI infrastructure into a clear, controllable, and thoughtfully designed product for a professional audience. In this role, you’ll work on one of the most technically advanced AI products on the market, immerse yourself in modern LLM and AI infrastructure operating at production scale, and have real influence on UX and product decisions across core user scenarios.
You’re welcome to work in our offices in Amsterdam, Berlin, London or Prague with a hybrid work schedule.
Your responsibilities will include:
We expect you to have:
It will be an added bonus if you have:
Groupon is a marketplace where customers discover new experiences and services every day and local businesses thrive. To date we have worked with over a million merchant partners worldwide, connecting over 16 million customers with deals across various categories. In a world often dominated by e-commerce giants, we stand out as one of the few platforms uniquely committed to helping local businesses succeed on a performance basis.
Groupon is on a radical journey to transform our business with relentless pursuit of results. Even with thousands of employees spread across multiple continents, we still maintain a culture that inspires innovation, rewards risk-taking and celebrates success. The impact here can be immediate due to our scale and the speed of our transformation. We're a "best of both worlds" kind of company. We're big enough to have the resources and scale, but small enough that a single person has a surprising amount of autonomy and can make a meaningful impact.
Groupon is mid-transformation. Seven parallel streams. Real targets: contact deflection, resolution rates, repurchase. The streams are moving, but each one needs a single accountable person who can both diagnose the problem and ship the fix. That's this role. You won't be coordinating. You won't be waiting for an analyst or speccing things out for an engineer. You'll pull the data yourself, write the prompt yourself, build the dashboard yourself, and hold people accountable without having authority over them. If something is blocking you, you do it.
You'll report to Maguelonne, Director of Operational Transformation. Your closest partners will be teams across Customer Service, Merchant Operations, Product, Engineering, Supply, and Content with key stakeholders based in Madrid and Prague.
At 30 days: You've personally pulled and analysed the data on your stream. You've shipped at least one small automation or prompt-driven workflow. You know the landscape — the streams, the blockers, the key players.
At 60 days: You own at least one stream with a clear delivery plan. You've prototyped one AI workflow running on real data. Stakeholders come to you when things are stuck.
At 6 months: Contact deflection rate is improving on your stream. At least one AI workflow is in production and being used daily. Your manager doesn't have to chase status — you surface risks before they become issues.
You can demonstrably:
The transformation agenda is real, the exec visibility is high, and the scope is broader than most roles you'll find at larger companies. If you want to move fast, ship AI into production, and be the person who actually unblocks things - apply.
Groupon is an AI-First Company
We’re committed to building smarter, faster, and more innovative ways of working—and AI plays a key role in how we get there. We encourage candidates to leverage AI tools during the hiring process where it adds value, and we’re always keen to hear how technology improves the way you work. If you’re passionate about AI or curious to explore how it can elevate your role—you’ll be right at home here.
Groupon’s purpose is to build strong communities through thriving small businesses. To learn more about the world’s largest local e-commerce marketplace, click here. You can also find out more about us in the latest Groupon news as well as learning about our DEI approach. If all of this sounds like something that’s a great fit for you, then click apply and join us on a mission to become the ultimate destination for local experiences and services.
Beware of Recruitment Fraud: Groupon follows a merit-based recruitment process without charging job seekers any fees. We've noticed an increase in recruitment fraud, including fake job postings and fraudulent interviews and job offers aimed at stealing personal information or money. Be cautious of individuals falsely representing Groupon's Talent Acquisition team with fake job offers. If you encounter any suspicious job offers or interview calls demanding money, recognize these as scams. Groupon is not responsible for losses from such dealings. For legitimate job openings (and a sneak peek into life at Groupon), always check our official career website at Groupon Careers
Ready to apply?
Apply to Groupon

We’re in an unbelievably exciting area of tech and are fundamentally reshaping the data storage industry. Here, you lead with innovative thinking, grow along with us, and join the smartest team in the industry.
This type of work—work that changes the world—is what the tech industry was founded on. So, if you're ready to seize the endless opportunities and leave your mark, come join us.
THE ROLE
Our Senior Applied AI Engineer builds and operates production-grade AI systems that extract meaning from large-scale unstructured document collections, enabling enterprise data discovery, classification, and governance.
This role owns the full lifecycle of graph intelligence solutions — from problem definition and data modelling to building and enriching knowledge graphs and deploying ML- and LLM-assisted analytics in production. The focus is on semantic and contextual analysis of unstructured data to uncover relationships, patterns, and insights that support AI safety, security, and compliance requirements.
WHAT YOU'LL DO
WHAT YOU BRING
WHAT YOU CAN EXPECT FROM US:
And because we understand the value of bringing your full and best self to work, we offer a variety of perks to manage a healthy balance, including flexible time off, wellness resources, and company-sponsored team events. Check out purebenefits.com for more information.
ACCOMMODATIONS AND ACCESSIBILITY:
Candidates with disabilities may request accommodations for all aspects of our hiring process. For more on this, contact us at TA-Ops@purestorage.com if you’re invited to an interview.
OUR COMMITMENT TO A STRONG AND INCLUSIVE TEAM:
We’re forging a future where everyone finds their rightful place and where every voice matters. Where uniqueness isn’t just accepted but embraced. That’s why we are committed to fostering the growth and development of every person, cultivating a sense of community through our Employee Resource Groups and advocating for inclusive leadership.
Everpure is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or any other characteristic legally protected by the laws of the jurisdiction in which you are being considered for hire.
Join us and bring your best.
Bring your bold.
Pure and simple.
Ready to apply?
Apply to Everpure
As a Data Engineer - Agentic AI focus you will architect the high-performance data ecosystem that transforms raw information into actionable intelligence and agentic AI solutions. You will bridge the gap between complex engineering and business impact, collaborating with Data Scientists and Product Managers to build the infrastructure that scales our AI capabilities. This isn't just about moving data; it’s about engineering the foundation for the next generation of automated customer support and operational excellence.
THE ROLE
As a Software Systems Architect, you will serve as the technical visionary and the bridge between complex global business needs and high-quality technical execution. Based in our Prague R&D hub, you will define the architectural roadmap for engineering teams across Prague and Santa Clara, transforming requirements from Sales, Marketing, HR, and Customer Service into scalable, AWS-native solutions. Your mission is to elevate our engineering standards by architecting cutting-edge agentic systems and intelligent orchestration layers. This is a high-impact leadership role where you will drive innovation, mentor a growing organization of 30+ developers, and ensure our internal platforms are resilient, secure, and future-proof.
WHAT YOU'LL DO
WHAT YOU BRING
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We are now building services and agentic tools that provide AI coding agents and end users with deeper context about codebases. Our code retrieval service already delivers meaningful improvements in agent speed and task performance, and we aim to push this further – extracting richer insights than snippets alone at the scale of several hundred thousand repositories.
We are looking for an AI Engineer who can design and implement agentic tools from scratch, bring them to the end users, and make coding with agents smarter, faster, and more reliable.
#LI-DNI
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
Software engineers and AI agents alike suffer from the same problem: finding that one person or place that will answer their tough, specific question. Many solutions promise to solve this with similarity search in vector databases. Unfortunately, finding the answer is often a puzzle with pieces to be collected across a myriad of contradictory sources and cannot be solved without surgical search and careful reasoning.
Spectrum collects data from an organization's code, docs, and issues, and organizes knowledge in a unified ontology that AI agents can efficiently search through and reason over. We aim to revolutionize the semantic layer space for software-building organizations and move beyond specs that fall out of sync with code, introducing a living spec – one that's extracted from the whole system and used to keep it aligned. Spectrum is meant to be the single source of truth for all product and architectural knowledge.
A resident of JetBrains' startup incubator, Spectrum enjoys startup speed and autonomy, and is backed by 25 years of developer tooling expertise. We are looking for a Senior ML Researcher to develop the core methods that make Spectrum possible – novel approaches to temporal ontology extraction, contradiction detection, and semantic alignment across heterogeneous software artifacts. You will help define and execute the research agenda, while also collaborating with JetBrains Research and external academic advisors.
*Some benefits may vary depending on location.
#LI-DNI
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
The Python Ecosystem team builds PyCharm – one of the most popular Python IDEs in the world – along with the Python plugin for IntelliJ IDEA. As AI changes how developers write, debug, and ship code, we’re making our Python tools AI-native. We’re looking for an AI Lead to drive this effort by shaping the architecture, building key components hands-on, and guiding the team in making strong decisions around AI-powered product development.
In this role, you will:
We’d love to talk to you if you have:
Nice to have:
Why join JetBrains?
*Some benefits may vary depending on location.
#LI-DNI
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
Software engineers and AI agents alike suffer from the same problem: finding that one person or place that will answer their tough, specific question. Many solutions promise to solve this with similarity search in vector databases. Unfortunately, finding the answer is often a puzzle with pieces to be collected across a myriad of contradictory sources and cannot be solved without surgical search and careful reasoning.
Spectrum collects data from an organization’s code, docs, and issues, and organizes knowledge in a unified ontology that AI agents can efficiently search through and reason over. We aim to revolutionize the semantic layer space for software-building organizations and move beyond specs that fall out of sync with code, introducing a living spec – one that’s extracted from the whole system and used to keep it aligned. Spectrum is meant to be the single source of truth for all product and architectural knowledge.
Spectrum is a resident of JetBrains' startup incubator, with startup speed and autonomy, and backed by 25 years of developer tooling expertise. We are looking for a top-class ML Engineer who will help us shape the future of software development. You will own our AI and ML engineering stack and help define the research agenda for our team. Your technical vision and design decisions will directly shape the product and determine its success.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
We are looking for an experienced AI Engineer who will design and deliver Artificial Intelligence solutions that improve our customers' experience and enable informed, data-driven decision making. We seek someone excited about working with cutting-edge technologies to build pragmatic, performant AI solutions. You'll collaborate with a team of outstanding builders around the world, characterized by a love for innovative technology, a desire to solve problems, and a continuous pursuit of building simple, elegant products that delight customers and simplify all levels of data engagement. You will work primarily with Data Scientists, other AI Engineers, and Machine Learning Operations Engineers, as well as teams across Collibra.
This is a hybrid role based in our Prague office. Our hybrid model means you’ll work from the office at least two days each week. This setup helps us stay connected, work more closely together, and keep making progress as a team.
Collibra recognizes and values that everyone has different needs, interests, and life goals. We built our benefits program with flexibility in mind to support you and your loved ones through a diverse range of circumstances and life events. These flexible offerings sit on a foundation of competitive compensation, health coverage, and time off. Learn more about Collibra’s benefits.
We create inclusion and belonging through how we onboard, meet, connect, engage, and communicate. Learn more about diversity, equity, and inclusion at Collibra.
At Collibra, we’re proud to be an equal opportunity employer. We realize the key to creating a company with a world-class culture and employee experience comes from who we hire and creating a workplace that celebrates everyone.
With this, we proudly consider qualified applicants without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sexual orientation, pregnancy, sex, gender identity, gender expression, genetic information, physical or mental disability, HIV status, registered domestic partner status, caregiver status, marital status, veteran or military status, citizenship status or any other legally protected category. If you have a need that requires accommodation, let us know by completing our Accommodations for Applicants form.
Ready to apply?
Apply to Collibra
Share this job
We’re building a new product from scratch - focused on pragmatic, production-ready LLM-based agents.
The research and architecture phase is already underway. Now we need engineers who will help turn those decisions into a real, scalable system.
This is not about blindly playing with prompts.
This is about building backend services, APIs, evaluation pipelines, and infrastructure that make agentic systems reliable in production.
We’re tackling a real-world problem with a clear path to market, backed by SafeQ Cloud - and you’ll also help shape and improve the in-house tools that power the product.
You won’t just get tickets — you’ll be part of discussions, refinements, and decisions.
If you’d like to work remotely, no problem. We’re used to hybrid collaboration based on team agreement.
WHAT WE OFFER:
WHAT WE EXPECT FROM YOU:
NICE TO HAVE:
Ready to apply?
Apply to Y Soft
One of our teams is looking for a new teammate. They are focusing on bringing AI into real products - not as a toy, but as something that actually solves problems.
You’ll be working on practical use of LLMs and AI approaches in a product environment. That means experimenting, validating ideas, and helping teams understand what works and what doesn’t. You won’t be sitting in a research silo - you’ll be working directly with teams and helping turn ideas into something usable.
If you’d like to work remotely, no problem. We’re used to hybrid collaboration based on team agreement.
WHAT WE OFFER:
WHAT WE EXPECT FROM YOU:
NICE TO HAVE:
Ready to apply?
Apply to Y Soft
Make is the leading visual platform for anyone to design, build, and automate anything—from tasks and workflows to apps and systems—without the need for coding skills. We are headquartered in the flourishing tech hub of Prague, Czech Republic, and our teams are spread across the USA, UK, Germany, France, Canada, India and Chile, among other locations.
We are seeking a highly motivated and experienced AI Product Manager to join our rapidly expanding product and engineering organization. Embedded within a team dedicated to our AI capabilities, you will serve as the team's product manager, championing the product AI strategy, driving rigorous prioritization, and ensuring successful execution through rapid iteration and shipping.
The ideal candidate for this role possesses a strong, hands-on understanding of the evolving AI landscape and the low-code/no-code market, with a bias for action and building. Success in this role means owning and articulating a compelling product vision to executive stakeholders, driving alignment across the organization, and collaborating deeply with AI researchers, engineers, data scientists, and go-to-market teams.
Here’s what you can expect when you apply:
Step 1: Pre-Application Video Submission. Briefly walk us through something you vibe-coded recently – it can be for work or a personal project. An idea, prototype, SaaS product, or app will all work. We’d love to hear about your thought process in Lovable, v0, Cursor, or the vibe-coding tool of your choice. Screen-record a very short video walkthrough of the user experience, and explain how you iterated and what you learned. This helps us understand your product development process. No polish needed – just your thinking, tools, and approach in action.
After you’ve submitted your task and CV, we’ll take a look and be in touch soon with next steps.
#careeratmake
#LI-KN1
What we stand for:
🤝 We roll together - We embrace different ideas to grow together and create powerful solutions.
🚀 Customer impact first - We empower our customers to succeed, aiming for sustainable impact.
⚽ Game on! - We're explorers at heart: play is our fuel and creativity has no limits.
For more, feel free to check out our Life at Make Instagram, Meet-up page, or YouTube to get a sense of the vibe.
At Make, we know that exceptional work comes from people who bring different perspectives and experiences. We build a place where everyone feels welcome, heard, and empowered to create, contribute, grow and make an impact. We encourage people of all backgrounds, identities, abilities, and experiences to apply. Our hiring decisions are based on your qualifications, skills, merit, and the needs of our business. We have zero tolerance for discrimination or harassment of any kind.
Ready to apply?
Apply to Make
Share this job
At JetBrains, code is our passion. Ever since we started back in 2000, we have strived to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production and free developers to focus on creativity and problem-solving.
The IntelliJ AI team develops the AI-specific core of JetBrains IDEs. We work on agentic workflows, intelligent editing assistance, and new AI-powered capabilities that redefine how developers interact with the IDEs.
We are looking for a Senior Software Developer to help us build the next generation of AI-powered features in JetBrains IDEs.
In this role, you will:
We are looking for engineers who:
Experience that would be especially valuable:
#LI-IM1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
Kineto is a next-generation platform that enables creators, educators, and small businesses to generate, deploy, and operate fully functional AI-powered web applications – instantly and at scale. It combines LLM-driven code generation, multi-tenant Postgres (Neon), dynamic hosting (GKE and Knative), automated deployments (Flux), analytics, billing, and a seamless chat-based UX to make software creation accessible to everyone. Our team is growing rapidly, and we’re now seeking an experienced Infrastructure Engineer who can design, build, and maintain our cloud-native platform, with a focus on scalability, reliability, and automated operations.
#LI-YY1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started, back in 2000, we’ve been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs. The ML Workflows Engineering team is dedicated to removing infrastructure challenges, streamlining machine learning operations (MLOps), and enabling teams to focus on the innovative work that matters most – building impactful ML models and intelligent agents. As part of the team, you'll play a key role in designing tools, automation, and pipelines that make machine learning development seamless and intuitive.
By integrating cutting-edge MLOps practices and engineering excellence, we aim to maximize productivity and remove the complexity of ML infrastructure so that our teams can push the boundaries of what’s possible in AI.
#LI-HYBRID
#LI-MR1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We’re looking for a Research Engineer who will own the training stack and model architecture for our Mellum LLM family. Your job is easier said than done: make training faster, cheaper, and more stable at a large scale. You’ll profile, design, and implement changes to the training pipeline – from architecture to custom GPU kernels, as needed.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started back in 2000, we have been striving to make the world’s most robust and effective developer tools. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We are working on an ambitious new platform that provides AI capabilities to all JetBrains products. Our platform is based on models developed in-house for writing and coding assistance, as well as integration with our strategic partners.
We are looking for a Research Engineer who can contribute to training foundation models for coding tasks. You’ll be working on developing Large Language Models from scratch and deploying them into production environments where they will be accessible by end users across the globe.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started, back in 2000, we’ve been striving to make the strongest, most effective developer tools on earth. Today, AI-powered assistance and agents are becoming a core part of how developers work in our IDEs.
We’re building multi-step coding agents that can understand large codebases, plan changes, call tools, and iterate with the user. As a Research Engineer in the Agentic Models team, you’ll be responsible for the models, training loops, and evaluation pipelines that power these agents.
You’ll work at the intersection of SFT, RL-style post-training, and product-driven evaluation, using our distributed GPU and MapReduce clusters to ship models into JetBrains products.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started, back in 2000, we have strived to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
We just unveiled JetBrains Central, a platform for agent-driven software development that connects tools, agents, and infrastructure. It enables automated AI workflows to run, be monitored, and be managed across teams – with clear visibility into results, costs, and performance.
JetBrains Central provides three core capabilities:
Governance and control: Policy enforcement, identity and access management, observability, auditability, and cost attribution for agent-driven work. Some of these functionalities are already available via the JetBrains Central Console.
Agent execution infrastructure: Cloud agent runtimes and computation provisioning that allow agents to run reliably across development environments.
Agent optimization and context: Shared semantic context across repositories and projects, enabling agents to access relevant knowledge, and task routing to the most appropriate models or tools.
Inside the Professional Services department, we are building a team of Forward Deployed Engineers to help our customers adopt AI-native development with JetBrains Central and enable JetBrains consulting partners to transform their customers’ AI-native software development lifecycle.
As a Principal Forward Deployed Engineer, you will work directly with early adopter customers to deploy JetBrains Central platform into complex enterprise environments and help them move from experimental AI usage to governed, scalable AI-native development.
You will act as a trusted advisor to customer engineering teams and consulting partners, while also mentoring other Forward Deployed Engineers and helping with complex deployments. In this role, you will help establish best practices for enterprise AI adoption and influence how organizations implement AI-driven software development.
This role is often a strong fit for engineers who have previously served as Principal or Staff Engineers, Heads of Engineering, CTOs, or Solutions Architects and who enjoy working directly with customers while staying deeply involved with complex technical systems.
This role combines consulting, architecture, and hands-on engineering. Your work in the field will directly influence how our platform evolves and how enterprises adopt AI-driven development.
You will work directly with the world’s most innovative companies — our client base includes 420 of the Fortune 500 — ensuring they maximize the value of their JetBrains investments.
#LI-KP1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started, back in 2000, we have strived to make the strongest, most effective developer tools on earth. By automating routine checks and corrections, our tools speed up production, freeing developers to grow, discover, and create.
AI features in JetBrains IDEs, developed by the IntelliJ AI team, have quickly become a core part of how developers work inside our IDEs. The IntelliJ AI team partners with product groups across JetBrains to embed advanced AI features that accelerate developer workflows and deliver real value to software engineers.
We are currently looking to hire a Senior Machine Learning Engineer to help us realize our ambitious vision of creating AI assistance that supports the entire development lifecycle across JetBrains IDEs. If selected, you will join the ML subteam within IntelliJ AI, driving the development of our ML system from end to end by defining evaluation and metrics, shaping context orchestration, and helping product teams tailor AI capabilities to their needs.
In this role, you will:
We’d be happy to have you on our team if you:
We’d be especially thrilled if you have:
#LI-MR1
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
At JetBrains, code is our passion. Ever since we started, back in 2000, we've been striving to make the strongest, most effective developer tools on earth. Today, AI-powered coding agents are becoming a core part of how developers write Kotlin – and we want to make sure they write it well.
The Kotlin AI Value Stream team is responsible for how AI agents understand, generate, and improve Kotlin code across all platforms: Android, Kotlin Multiplatform, server-side, web, desktop, and others. We build the evaluation infrastructure, error analysis tools, and post-training pipelines that measure and improve agent behavior on real Kotlin developer tasks.
As a Research Engineer on this team, you'll own the end-to-end loop: Analyze how agents fail on Kotlin → build evals that capture those failures → research and implement methods to fix them → measure the improvement. Your work will directly shape how millions of developers experience Kotlin through AI coding agents.
Build tools for agentic error analysis
Build evaluation pipelines
Research methods for improving agent and model behavior on Kotlin
Build public Kotlin benchmarks
Don't check every box? That's okay – if you're excited about this work and bring strong fundamentals, we'd love to hear from you. We're happy to talk and provide the training you need to grow into the role.
*Some benefits may vary depending on location.
#LI-DNI
We are an equal opportunity employer
We know great ideas can come from anyone, anywhere. That’s why we do our best to create an open and inclusive workplace – one that welcomes everyone regardless of their background, identity, religion, age, accessibility needs, or orientation.
We process the data provided in your job application in accordance with the Recruitment Privacy Policy.
Ready to apply?
Apply to JetBrains
Share this job
Welcome to the future of cloud networking and security!
Cato Networks is the first company to converge enterprise networking and security into one centralized, global, cloud-delivered service. It is led by networking and security pioneer Shlomo Kramer (co-founder of Check Point and Imperva, and an early investor in Palo Alto Networks, Exabeam, Trusteer, and more). Cato’s unique technology inspired a brand-new product category, later named “SASE” by Gartner, in a market expected to reach $28.5 billion by 2028.
This is your opportunity to get on the rocket ship and join a company that is building a cutting-edge enterprise network and secure cloud platform, and is on a fast track to becoming the worldwide market leader – don’t miss it!
Ready to apply?
Apply to Cato Networks