Our client is a global technology company transforming the way brands and agencies make marketing decisions. Its platform enables marketers to plan and activate cross-channel, programmatic media campaigns using real-time research, proprietary audience data, advanced analytics, and more than 150 integrated partners including Facebook, Instagram, Pinterest, Snapchat, and Twitter. The company operates across North America, Europe, the Middle East, Asia, and Australia.
As a member of the DevOps team, you will be responsible for delivering solutions to your assigned team and scalable, reusable solutions across the entire engineering organization. You will operate infrastructure and ensure services are monitored, reliable, automated, and scalable, supporting tasks from server provisioning and application deployments to troubleshooting. You will also help bridge the gap between development and operations, collaborating across engineering and business teams.
The global benefits package includes:
Ready to apply?
Apply to Coherent Solutions
The Data Management Platform (DMP) is the core system that receives, processes, and serves information to external DMPs and real-time platforms. Its purpose is to store the connections between users/devices and their behavioral categories; other teams then use this data for analytics and digital ad targeting. The team's roadmap includes building a new system on a streaming architecture with an appropriate tech stack.
The global benefits package includes:
Ready to apply?
Apply to Coherent Solutions
Kimchi is the AI platform inside Cast AI. We started by helping companies run LLMs on their own Kubernetes clusters - now we're building the execution layer where agents do real work.
Our infrastructure today: multi-model inference (MiniMax, Kimi, GLM-5, Nemotron, DeepSeek) with intelligent routing, an OpenAI-compatible API, and deployment flexibility from our GPUs to your VPC. The inference layer is the foundation. What we're hiring for sits on top of it: coding agents, agent runtimes, orchestration systems, and the reliability engineering that makes them actually finish things.
Tech Stack: TypeScript, Go, Kubernetes, AWS/GCP/Azure, MCP, Prometheus/Grafana/Loki, GitLab CI, ArgoCD.
Why harness engineering matters here
OpenAI and Anthropic ship models. They also ship one harness each - the scaffolding that turns a raw model into something that can plan, execute, recover, and complete work. We ship a different kind of harness: one built for cost-conscious, long-horizon autonomy, running on inference infrastructure we control end-to-end.
A decent model with a great harness beats a great model with a bad harness. We've watched this play out. The gap between what today's models can do and what you see them doing is largely a harness gap - and that gap is where we operate.
What you'll build
The ratchet.
Every time our agent makes a mistake, we engineer a solution so it never makes that mistake again. That means hooks that enforce constraints the model "knows" but forgets: pre-commit lint checks, permission gates, context compaction before the window fills. Success is silent, failures are verbose.
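The ratchet pattern described above can be sketched as a hook registry that runs before every agent action. This is an illustrative sketch only, assuming a simple action shape; the names and rules are hypothetical, not Cast AI's actual API.

```typescript
// Hypothetical sketch of the "ratchet": each past failure becomes a hook
// that blocks the same mistake forever. All names are illustrative.

interface AgentAction {
  kind: "commit" | "shell" | "write";
  payload: string;
}

// A hook returns null on pass, or a verbose failure message on block.
type Hook = (action: AgentAction) => string | null;

const hooks: Hook[] = [
  // Pre-commit lint gate: added after the agent once committed debug code.
  (a) =>
    a.kind === "commit" && /\bdebugger\b/.test(a.payload)
      ? "lint: committing a `debugger` statement is blocked"
      : null,
  // Permission gate: added after the agent once ran a destructive command.
  (a) =>
    a.kind === "shell" && /\brm\s+-rf\b/.test(a.payload)
      ? "permissions: destructive shell commands require explicit approval"
      : null,
];

// Success is silent, failures are verbose.
function runWithRatchet(action: AgentAction): { ok: boolean; errors: string[] } {
  const errors = hooks
    .map((h) => h(action))
    .filter((e): e is string => e !== null);
  return { ok: errors.length === 0, errors };
}
```

Each new hook narrows the failure space monotonically: constraints the model "knows" but forgets become checks it cannot bypass.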
Long-horizon execution.
Our harness is built around spec-driven autonomy: meta-prompting, fresh context per task, worktree-per-slice git strategy, automatic replanning, crash recovery, stuck detection. We're implementing Ralph loops - when the model tries to exit, we intercept and reinject the goal into a fresh context. The agent reads state from disk and continues. Multi-session, multi-day work, without context rot.
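The loop above can be sketched minimally: when the model tries to exit before the goal is met, the harness intercepts, spins up a fresh context, and reinjects the goal from persisted state. The `Model` signature, state shape, and attempt cap are assumptions for illustration.

```typescript
// Hypothetical sketch of a "Ralph loop". State lives outside the model
// (on disk in the real system; a plain object here), so each iteration
// starts from a fresh context and context rot cannot accumulate.

interface DiskState {
  goal: string;
  done: boolean;
  attempts: number;
}

type Model = (goal: string, state: DiskState) => { wantsExit: boolean; done: boolean };

function ralphLoop(model: Model, state: DiskState, maxAttempts = 5): DiskState {
  while (!state.done && state.attempts < maxAttempts) {
    state.attempts += 1;
    // Fresh context per iteration: the model sees only the goal and the
    // persisted state, never the previous transcript.
    const result = model(state.goal, state);
    state.done = result.done;
    if (result.wantsExit && !result.done) {
      // Intercept the exit: loop back and reinject the goal.
      continue;
    }
  }
  return state;
}
```

The attempt cap stands in for the real system's stuck detection; the point is that "the model stops" and "the work is done" are decided by the harness, not the model.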
Planner/executor splits.
Planning with a reasoning model, executing with a fast one, evaluating with a third. Separating generation from evaluation beats self-verification because agents reliably skew positive when grading their own work.
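The split can be sketched as three independent roles wired together by the harness. Here each role is a plain function; in practice each would call a different model (reasoning, fast, independent judge). All names are illustrative assumptions.

```typescript
// Hypothetical sketch of a planner/executor/evaluator split.

type Planner = (goal: string) => string[];                   // reasoning model: goal -> steps
type Executor = (step: string) => string;                    // fast model: step -> artifact
type Evaluator = (goal: string, work: string[]) => boolean;  // third model: independent grade

function runSplit(goal: string, plan: Planner, exec: Executor, judge: Evaluator) {
  const steps = plan(goal);
  const artifacts = steps.map(exec);
  // Generation and evaluation are separated: the executor never grades its
  // own output, avoiding the positive skew of self-verification.
  return { artifacts, accepted: judge(goal, artifacts) };
}
```

The key design point is that `judge` receives only the goal and the artifacts, never the executor's reasoning, so approval cannot be talked into existence.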
The harness surface.
CLI, TUI, MCP integration, sandboxed execution, telemetry. Our AGENTS.md is short - every line traces to a specific thing that went wrong. TypeScript on the surface, Go where it matters.
Memory and context.
Moving agents off laptops, giving them state that survives across sessions, managing context so information lands where it's actionable. Compaction, tool-call offloading, progressive skill disclosure.
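Tool-call offloading, one of the compaction moves above, can be sketched as follows: large tool outputs are written aside and replaced in the transcript by a short reference, so the window fills with decisions rather than payloads. The threshold, message shape, and in-memory store are illustrative assumptions (the real system persists to disk).

```typescript
// Hypothetical sketch of context compaction via tool-call offloading.

interface Message {
  role: "tool" | "agent";
  content: string;
}

// Stands in for the on-disk state the agent reads back when needed.
const offloadStore = new Map<string, string>();

function compact(transcript: Message[], maxToolChars = 200): Message[] {
  return transcript.map((m, i) => {
    if (m.role === "tool" && m.content.length > maxToolChars) {
      const key = `tool-output-${i}`;
      offloadStore.set(key, m.content); // full payload survives out of band
      return {
        role: m.role,
        content: `[offloaded to ${key}, ${m.content.length} chars]`,
      };
    }
    return m;
  });
}
```

Information still lands where it's actionable: the reference tells the agent the output exists and where to fetch it, without spending window on the bytes themselves.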
What makes this different (with receipts)
You've seen the pitch: "we route to the best model." Everyone says that. Here's what we actually have:
What success looks like (after 6 months):
This is a location-specific opportunity. We are currently accepting applications from candidates residing in the following European countries: Bulgaria, Croatia, Estonia, Greece, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia, and Ukraine.
*As part of our standard hiring process, we would like to inform you that a background check may be conducted at the final stage of recruitment through our third-party provider, Checkr.
*Please note that Cast AI does not provide any form of visa sponsorship/work permit.
#LI-Remote
Ready to apply?
Apply to Cast AI
Cast AI is an automation platform that operates cloud-native and AI infrastructure at scale. By embedding autonomous decision-making directly into Kubernetes and cloud environments, Cast AI continuously optimizes performance, reliability, and efficiency in production.
The old way doesn't work. As Kubernetes and AI environments grow, manual decision-making doesn't scale with them. Cast AI replaces tickets, alerts, and manual tuning with continuous automation that adapts infrastructure as conditions change. Efficiency and cost savings follow naturally from that automation.
Over 2,100 companies already rely on Cast AI, including Akamai, BMW, Cisco, FICO, HuggingFace, NielsenIQ, Swisscom, and TGS.
Global team, diverse perspectives
We're headquartered in Miami, but our impact is international. We take a global and intentional approach to diversity. Today, Cast AI operates across 34 countries spanning Europe, North America, Latin America, and APAC, bringing a wide range of perspectives into how we build and lead.
Unicorn momentum
In January 2026, we achieved unicorn status with a strategic investment from Pacific Alliance Ventures, the corporate venture arm of Shinsegae Group (a $50+ billion Korean conglomerate). Our valuation now exceeds $1 billion, and we're just getting started.
Join us as we build the future of autonomous infrastructure.
This is a location-specific opportunity. We are currently accepting applications from candidates residing in the following European countries: Bulgaria, Croatia, Estonia, Greece, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia, and Ukraine.
As a Senior Software Engineer, you will have the opportunity to work on different key features of our product. We are currently hiring Senior Software Engineers for the following teams:
- Workload Optimization - Automates workload resource management by dynamically adjusting resource allocations, helping developers significantly reduce costs and improve application reliability.
- Karpenter - The Karpenter team powers the integration between Karpenter and Cast AI, bringing enterprise capabilities to the most popular open source Kubernetes autoscaler. We enhance Karpenter with advanced features that improve application reliability and performance while optimizing costs. By joining the team, you’ll bridge open source innovation with enterprise requirements, directly impacting how organizations run Karpenter at scale.
- Reporting - Builds a scalable reporting system that ingests millions of rows per second into our time-series databases, providing insights into cost savings, workload efficiencies, and Cast AI automation impact.
- Pricing - Drives the synchronization of public and customer cloud resources, availability, and dynamic pricing across all major cloud providers. Empowers autoscaling by leveraging discounts, commitments, and cross-cluster tracking to maximize savings. Provides a reliable source of truth for node pricing, resources, components, discounts, and commitments.
- Autoscaler - Automates Kubernetes node autoscaling to optimize clusters, balance workloads, remove underutilized nodes, and dynamically allocate capacity in real time, thereby reducing cluster costs by half.
- Identity - Builds and maintains the trust and access foundation for the entire platform, ensuring every user, service, and workload authenticates and interacts securely and seamlessly at scale.
- Billy - Powers the critical day-2 operations layer of the platform - from billing and audit trails to notifications and feature flags - ensuring the platform runs reliably, transparently, and at scale for every customer, every day.
*As part of our standard hiring process, we would like to inform you that a background check may be conducted at the final stage of recruitment through our third-party provider, Checkr.
*Please note that Cast AI does not provide any form of visa sponsorship/work permit.
#LI-Remote
Ready to apply?
Apply to Cast AI
About Sisense
Sisense is a leading AI-powered Analytics Platform as a Service (AnPaaS) that empowers product teams and developers to embed conversational, predictive, and agentic intelligence directly into applications and workflows. Our API-first, developer-first platform turns complex data into faster, smarter, actionable decisions for over 2,000 global customers across financial services, retail, healthcare, and technology.
With the recent launch of Sisense Intelligence (Intelligence Assistant, MCP server, Managed LLM), we are accelerating innovation in embedded analytics at scale.
The Role
As a DevOps Engineer, you will play a key role in designing, building, and maintaining the scalable, reliable infrastructure that powers Sisense’s AI-powered embedded analytics platform. You will work on cloud-native environments, CI/CD pipelines, automation, monitoring, and production stability — helping ensure our platform delivers high performance and developer experience at scale.
This is a hands-on role with significant ownership and impact on both internal development velocity and customer-facing reliability.
Key Responsibilities
Requirements
Nice to Have
Why Join Sisense
This is a high-impact role where you’ll work on the infrastructure behind cutting-edge embedded AI analytics. You’ll join a talented global engineering team, enjoy competitive compensation, meaningful equity, and the opportunity to solve complex technical challenges in a fast-growing company.
If you’re a strong DevOps Engineer who loves building reliable, scalable systems and wants to work on AI-powered data solutions, we’d love to hear from you.
Ready to apply?
Apply to Sisense