All active Kubernetes roles based in Latvia.
Cast AI is an automation platform that operates cloud-native and AI infrastructure at scale. By embedding autonomous decision-making directly into Kubernetes and cloud environments, Cast AI continuously optimizes performance, reliability, and efficiency in production.
The old way doesn't work. As Kubernetes and AI environments grow, manual decisions don't scale. Cast AI replaces tickets, alerts, and manual tuning with continuous automation that adapts infrastructure as conditions change. Efficiency and cost savings follow naturally from that automation.
Over 2,100 companies already rely on Cast AI, including Akamai, BMW, Cisco, FICO, HuggingFace, NielsenIQ, Swisscom, and TGS.
Global team, diverse perspectives
We're headquartered in Miami, but our impact is international. We take a global and intentional approach to diversity. Today, Cast AI operates across 34 countries spanning Europe, North America, Latin America, and APAC, bringing a wide range of perspectives into how we build and lead.
Unicorn momentum
In January 2026, we achieved unicorn status with a strategic investment from Pacific Alliance Ventures, the corporate venture arm of Shinsegae Group (a $50+ billion Korean conglomerate). Our valuation now exceeds $1 billion, and we're just getting started.
Join us as we build the future of autonomous infrastructure.
Kimchi is an open-source AI inference platform built for teams running serious agentic coding workloads. We're building the harness, the routing, and the infrastructure layer that makes running your own AI coding stack possible - and affordable. We're early, moving fast, and the developer community is central to how we grow.
What you'll do
What we're looking for
*As part of our standard hiring process, we would like to inform you that a background check may be conducted at the final stage of recruitment through our third-party provider, Checkr.
*Please note that Cast AI does not provide any form of visa sponsorship/work permit.
#LI-Remote
Ready to apply?
Apply to Cast AI
This is a location-specific opportunity. We are currently accepting applications from candidates residing in the following European countries: Bulgaria, Croatia, Estonia, Greece, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia, and Ukraine.
You'll join one of several teams building the low-level systems behind CAST AI's multi-cloud automation platform. The work sits at the intersection of Kubernetes, cloud infrastructure, and Linux.
Depending on your strengths, you'll work in one of these areas:
Common across all teams: Go, deep Kubernetes, multi-cloud, building beyond what the ecosystem offers.
How we build
We invest heavily in agentic development and AI-powered tooling. Engineers work with code agents and automated workflows daily. We expect you to shape how these tools evolve.
What would make you stand out
Ready to apply?
Apply to Cast AI
Kimchi is the AI platform inside CAST AI. We started by helping companies run LLMs on their own Kubernetes clusters - now we're building the execution layer where agents do real work.
Our infrastructure today: multi-model inference (MiniMax, Kimi, GLM-5, Nemotron, DeepSeek) with intelligent routing, an OpenAI-compatible API, and deployment flexibility from our GPUs to your VPC. The inference layer is the foundation. What we're hiring for sits on top of it: coding agents, agent runtimes, orchestration systems, and the reliability engineering that makes them actually finish things.
Tech Stack: TypeScript, Go, Kubernetes, AWS/GCP/Azure, MCP, Prometheus/Grafana/Loki, GitLab CI, ArgoCD.
Why harness engineering matters here
OpenAI and Anthropic ship models. They also ship one harness each - the scaffolding that turns a raw model into something that can plan, execute, recover, and complete work. We ship a different kind of harness: one built for cost-conscious, long-horizon autonomy, running on inference infrastructure we control end-to-end.
A decent model with a great harness beats a great model with a bad harness. We've watched this play out. The gap between what today's models can do and what you see them doing is largely a harness gap - and that gap is where we operate.
What you'll build
The ratchet.
Every time our agent makes a mistake, we engineer a solution so it never makes that mistake again. That means hooks that enforce constraints the model "knows" but forgets: pre-commit lint checks, permission gates, context compaction before the window fills. Success is silent, failures are verbose.
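The ratchet idea above can be sketched in code. This is an illustrative sketch only, not our actual harness API; the `Hook`, `AgentAction`, and `runHooks` names are hypothetical stand-ins. The point is the shape: each past failure becomes a permanent, mechanical check that runs before the agent's action lands.

```typescript
// Hypothetical sketch of a "ratchet" hook pipeline: every past mistake
// becomes a mechanical gate the agent's actions must pass through.
type HookResult = { ok: true } | { ok: false; reason: string };

interface AgentAction {
  kind: "commit" | "write" | "exec";
  payload: string;
}

type Hook = (action: AgentAction) => HookResult;

// Each hook encodes one lesson learned from a specific past failure.
const hooks: Hook[] = [
  // Lesson: the model "knows" not to commit debug prints, but forgets.
  (a) =>
    a.kind === "commit" && /console\.log\(/.test(a.payload)
      ? { ok: false, reason: "debug print in commit" }
      : { ok: true },
  // Lesson: never let the agent run destructive shell commands.
  (a) =>
    a.kind === "exec" && /\brm\s+-rf\b/.test(a.payload)
      ? { ok: false, reason: "destructive command blocked" }
      : { ok: true },
];

// Success is silent; failures are verbose (the reason surfaces).
function runHooks(action: AgentAction): HookResult {
  for (const hook of hooks) {
    const result = hook(action);
    if (!result.ok) return result;
  }
  return { ok: true };
}
```

Because hooks only accumulate, a mistake that was gated once can never silently recur, regardless of what the model remembers.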
Long-horizon execution.
Our harness is built around spec-driven autonomy: meta-prompting, fresh context per task, worktree-per-slice git strategy, automatic replanning, crash recovery, stuck detection. We're implementing Ralph loops - when the model tries to exit, we intercept and reinject the goal into a fresh context. The agent reads state from disk and continues. Multi-session, multi-day work, without context rot.
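The Ralph-loop mechanic can be sketched as follows. This is a simplified illustration under stated assumptions: `TaskState`, `runAgentSession`, and `ralphLoop` are hypothetical names, and the session function is a stub standing in for a fresh-context model call that reads prior progress from disk.

```typescript
// Hypothetical sketch of a Ralph loop: when the model tries to exit
// before the goal is met, intercept and restart with a fresh context.
interface TaskState {
  goal: string;
  done: boolean;
  attempts: number;
}

// Stub: a real harness would spawn a fresh-context model session here
// that reads persisted state and advances the task; we simulate one
// unit of progress per session, finishing after three.
function runAgentSession(state: TaskState): TaskState {
  const progress = state.attempts + 1;
  return { ...state, attempts: progress, done: progress >= 3 };
}

function ralphLoop(initial: TaskState, maxAttempts = 10): TaskState {
  let state = initial;
  while (!state.done && state.attempts < maxAttempts) {
    // The model "exits" at the end of each session; instead of
    // accepting that, we reinject the goal into a brand-new context.
    state = runAgentSession(state);
  }
  return state;
}
```

Because each iteration starts from persisted state rather than an ever-growing transcript, the loop can span sessions and days without context rot.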
Planner/executor splits.
Planning with a reasoning model, executing with a fast one, evaluating with a third. Separating generation from evaluation beats self-verification because agents reliably skew positive when grading their own work.
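A minimal sketch of that three-role split, with the caveat that `callModel` is a stub standing in for three distinct model endpoints and all names are hypothetical:

```typescript
// Illustrative planner/executor/evaluator split. In a real system each
// role would hit a different model (reasoning, fast, and judge).
type Role = "planner" | "executor" | "evaluator";

// Stub keeping the sketch self-contained; real calls elided.
function callModel(role: Role, input: string): string {
  if (role === "planner") return `plan: ${input} -> [step1, step2]`;
  if (role === "executor") return `diff for ${input}`;
  return input.includes("diff") ? "pass" : "fail";
}

function runTask(task: string): { verdict: string; output: string } {
  const plan = callModel("planner", task);    // reasoning model plans
  const output = callModel("executor", plan); // fast model executes
  // A third model grades the result; the executor never grades its own
  // work, because agents reliably skew positive on self-evaluation.
  const verdict = callModel("evaluator", output);
  return { verdict, output };
}
```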
The harness surface.
CLI, TUI, MCP integration, sandboxed execution, telemetry. Our AGENTS.md is short - every line traces to a specific thing that went wrong. TypeScript on the surface, Go where it matters.
Memory and context.
Moving agents off laptops, giving them state that survives across sessions, managing context so information lands where it's actionable. Compaction, tool-call offloading, progressive skill disclosure.
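Context compaction, in its simplest form, can be sketched like this. The shape is an assumption for illustration: a real harness would summarize with a model call and offload tool results to re-fetchable references, whereas this stub just replaces dropped messages with a placeholder summary.

```typescript
// Minimal context-compaction sketch (assumed shape, not the real API):
// before the window fills, older messages collapse into a summary so
// the recent, actionable tail always fits the budget.
interface Message {
  role: string;
  content: string;
}

function compact(history: Message[], maxMessages: number): Message[] {
  if (history.length <= maxMessages) return history;
  // Keep the most recent messages, reserving one slot for the summary.
  const keep = history.slice(history.length - (maxMessages - 1));
  const dropped = history.length - keep.length;
  // Summarization stub: a real harness would call a model here.
  const summary: Message = {
    role: "system",
    content: `[summary of ${dropped} earlier messages]`,
  };
  return [summary, ...keep];
}
```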
What makes this different (with receipts)
You've seen the pitch: "we route to the best model." Everyone says that. Here's what we actually have:
What success looks like (after 6 months):
Ready to apply?
Apply to Cast AI
As a Senior Software Engineer, you will have the opportunity to work on different key features of our product. We are currently hiring Senior Software Engineers for the following teams:
- Workload Optimization - Automates workload resource management by dynamically adjusting resource allocations, helping developers significantly reduce costs and improve application reliability.
- Karpenter - The Karpenter team powers the integration between Karpenter and Cast AI, bringing enterprise capabilities to the most popular open source Kubernetes autoscaler. We enhance Karpenter with advanced features that improve application reliability and performance while optimizing costs. By joining the team, you’ll bridge open source innovation with enterprise requirements, directly impacting how organizations run Karpenter at scale.
- Reporting - Builds a scalable reporting system that ingests millions of rows per second into our time-series databases, providing insights into cost savings, workload efficiencies, and Cast AI automation impact.
- Pricing - Drives the synchronization of public and customer cloud resources, availability, and dynamic pricing across all major cloud providers. Empowers autoscaling by leveraging discounts, commitments, and cross-cluster tracking to maximize savings. Provides a reliable source of truth for node pricing, resources, components, discounts, and commitments.
- Autoscaler - Automates Kubernetes node autoscaling to optimize clusters, balance workloads, remove underutilized nodes, and dynamically allocate capacity in real time, thereby reducing cluster costs by half.
- Identity - Builds and maintains the trust and access foundation for the entire platform, ensuring every user, service, and workload authenticates and interacts securely and seamlessly at scale.
- Billy - Powers the critical day-2 operations layer of the platform - from billing and audit trails to notifications and feature flags - ensuring the platform runs reliably, transparently, and at scale for every customer, every day.
Ready to apply?
Apply to Cast AI
For more than 30 years, Verifone has established a remarkable record of leadership in the electronic payment technology industry. Verifone is one of the leading electronic payment solutions brands and among the largest providers of electronic payment systems worldwide.
Verifone has a diverse, dynamic, and fast-paced work environment in which employees are focused on results and have opportunities to excel. We take pride in working with leading retailers, merchants, banks, and third-party partners to invent and deliver innovative payment solutions around the world. We strive for excellence in our products and services and are obsessed with customer happiness.
Across the globe, Verifone employees are leading the payments industry through experience, innovation, and an ambitious spirit. Whether it’s developing the next generation of secure payment systems or finding new ways to bring electronic payments to emerging markets, the Verifone team is dedicated to the success of our customers, partners, and investors. It is this passion for innovation that drives every Verifone employee toward personal and professional success.
Verifone is proudly an in-office work culture as we see immense benefits to career development and business results from our colleagues being physically co-located.
What’s Exciting About the Role
Verifone is seeking a Kafka DevOps Engineer to join our Platform Engineering team. This is an operations-first role with a strong emphasis on scripting, automation, and pipeline development. You’ll be hands-on with day-to-day Kafka operations, reliability, tuning, and high availability for payment gateway solutions that process billions of transactions annually on-prem and in AWS Cloud. Beyond keeping the lights on, you’ll play a key role in building the data pipelines—of which Kafka is a core component—that power Verifone’s new AI, machine learning, and analytics initiatives. You’ll also be part of an active effort to migrate Kafka and related services to Kubernetes, giving you hands-on experience with a meaningful infrastructure modernization project. The technology footprint is broad: Redis, MongoDB, PostgreSQL, MySQL, Snowflake, and more—so you’ll grow well beyond a single-technology niche.
Key Responsibilities
Kafka Operations & Reliability
Scripting, Automation & DevOps
Data Pipeline Development
Cloud & Infrastructure
Required Qualifications / Skills
Preferred Skills (Highly Desired)
Data Engineering & Pipelines
Database & Caching Technologies
Infrastructure & Security
What We Offer
Verifone is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. Verifone is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
Ready to apply?
Apply to Verifone
GR8 Tech builds B2B iGaming platforms for operators who play to lead.
We deliver full-cycle, high-impact tech designed to scale — from seamless integrations and expert consulting to long-term operational support. Our platform powers millions of active players and drives real business growth. Call it what it is: the iGaming Platform for Champions.
With 1000+ GR8 people across locations and time zones, we don’t just ship technology — we help operators build success stories across brands, markets, and geos.
Our ambition drives us. Our people make it real.
If you’re a challenger in spirit and a champion in action — join us.
We are looking for an Engineering Manager to lead engineering teams within the Sports Business Unit, responsible for delivering all sports betting content and functionality for our B2B platform. The Sports BU owns end-to-end development of sports-related capabilities, including trading, feeds, risk management, mathematical models, odds services, and sportsbook content. This role combines people leadership, technical delivery ownership, and cross-functional collaboration with Product, Trading, and Platform teams to ensure scalable, reliable, and high-performance sportsbook solutions.
Team management & people development
Project planning & delivery management
Quality assurance & technical governance
Process improvement & best practices
Communication & stakeholder management
Must-have
Nice-to-have
Benefits Cafeteria — annual budget you allocate to:
Sports • Medical • Mental health • Home office • Languages.
Work-life & support
GR8 Tech culture is how we win — through trust, ownership, and a growth mindset. We move fast, stay curious, and keep it real, with open feedback, room to experiment, and a team that’s got your back.
FUELLED BY TRUST: we’re open, honest, and have each other’s backs.
OWN YOUR GAME: we take initiative and own what we do.
ACCELER8: we move fast, focus smart, and keep it simple.
CHALLENGE ACCEPTED: we grow through challenges and stay curious.
BULLETPROOF: we’re resilient, ready, and always have a plan.
Ready to apply?
Apply to GR8 Tech
The mission of Speechify is to make sure that reading is never a barrier to learning.
Over 50 million people use Speechify’s text-to-speech products to turn whatever they’re reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify’s text-to-speech reading products include its iOS app, Android App, Mac App, Chrome Extension, and Web App. Google recently named Speechify the Chrome Extension of the Year and Apple named Speechify its 2025 Design Award winner for Inclusivity.
Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. These include frontend and backend engineers, AI research scientists, and others from Amazon, Microsoft, and Google, leading PhD programs like Stanford, high growth startups like Stripe, Vercel, Bolt, and many founders of their own companies.
Overview
The responsibilities of our Platform team include building and maintaining all backend services, including, but not limited to, payments, analytics, subscriptions, new products, text to speech, and external APIs.
This is a key role and ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building great user experiences that delight users.
We are a flat organization that allows anyone to become a leader by showing excellent technical skills and delivering results consistently and fast. Work ethic, solid communication skills, and obsession with winning are paramount.
Our interview process involves several technical interviews and we aim to complete them within 1 week.
What You’ll Do
An Ideal Candidate Should Have
What We Offer
Think you’re a good fit for this job?
Tell us more about yourself and why you're interested in the role when you apply.
And don’t forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Ready to apply?
Apply to Speechify
At Bondora, our mission is simple: to make finance easy, transparent, and accessible for everyone.
Founded in 2008, we’ve helped more than 700,000 customers across Europe borrow and invest effortlessly through our trusted digital products.
As a rapidly growing financial technology company, we’re reaching new heights with a bold vision. We’re set to acquire a banking license, unlocking a world of possibilities for our customers. This transition will allow us to expand our lending across more EU countries and broaden our product suite to deliver even more value.
Join us on this journey and let’s build the future of finance together!
As the Lead Machine Learning Engineer at Bondora, you will be the backbone of our Data Science delivery. Your mission is to build the robust infrastructure that powers automated model pipelines, ensures deployment reliability, and governs the full ML lifecycle from experimentation to production.
This is a strategic, hands-on engineering role. You will collaborate closely with Data Science, Data Engineering, and Development teams to remove friction, improve scalability, and bring stable, high-quality ML solutions into everyday decision making.
You will guide the evolution of our ML engineering stack, lead high impact initiatives, and mentor engineers while shaping a culture of technical excellence 🌿
Your main responsibilities 🎯
If you have the guts, ideas, and a team vision, apply now!
Our hiring process:
1️⃣ Meet Kerli H., our recruiter, for an initial conversation about your background and ambitions.
2️⃣ Complete our personality and logic assessments via Alva Labs.
3️⃣ Meet Rafael, our Head of Data Science.
4️⃣ Complete a case study.
5️⃣ Chat with Juris, our Chief Credit Officer.
6️⃣ If everything aligns, welcome to Bondora 🎉
Join our team of Bondorians, and let's make it happen!
If you'd like to read how Bondora collects, uses, and protects your personal data during recruitment, please see HERE.
Ready to apply?
Apply to Bondora