Why Verifone
For more than 30 years, Verifone has established a remarkable record of leadership in the electronic payment technology industry. Verifone is one of the leading electronic payment solutions brands and among the largest providers of electronic payment systems worldwide.
Verifone has a diverse, dynamic, and fast-paced work environment in which employees are focused on results and have opportunities to excel. We take pride in working with leading retailers, merchants, banks, and third-party partners to invent and deliver innovative payment solutions around the world. We strive for excellence in our products and services and are obsessed with customer happiness.
Across the globe, Verifone employees are leading the payments industry through experience, innovation, and an ambitious spirit. Whether it’s developing the next generation of secure payment systems or finding new ways to bring electronic payments to emerging markets, the Verifone team is dedicated to the success of our customers, partners, and investors. It is this passion for innovation that drives every Verifone employee toward personal and professional success.
Verifone proudly maintains an in-office work culture, as we see immense benefits to career development and business results when our colleagues are physically co-located.
What’s Exciting About the Role
Verifone is seeking a Kafka DevOps Engineer to join our Platform Engineering team. This is an operations-first role with a strong emphasis on scripting, automation, and pipeline development. You’ll be hands-on with day-to-day Kafka operations, reliability, tuning, and high availability for payment gateway solutions that process billions of transactions annually, both on-premises and in the AWS cloud. Beyond keeping the lights on, you’ll play a key role in building the data pipelines (with Kafka as a core component) that power Verifone’s new AI, machine learning, and analytics initiatives. You’ll also join an active effort to migrate Kafka and related services to Kubernetes, gaining hands-on experience with a meaningful infrastructure modernization project. The technology footprint is broad: Redis, MongoDB, PostgreSQL, MySQL, Snowflake, and more, so you’ll grow well beyond a single-technology niche.
Key Responsibilities
Kafka Operations & Reliability
- Manage and support Apache Kafka clusters (including MSK), Kafka Connect ecosystem, and KSQL for high-throughput, fault-tolerant messaging and event streaming pipelines.
- Monitor system health, set up alerts, and drive incident response and root cause analysis (RCA).
- Handle day-to-day operational tasks including offset management, lag monitoring, consumer group management, and cluster rebalancing.
- Perform routine cluster maintenance: upgrades, configuration changes, scaling, and health checks.
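As a small illustration of the lag-monitoring work above, here is a minimal, self-contained sketch of how per-partition consumer lag is derived from log-end and committed offsets. The topic name and offset numbers are hypothetical; in production these values would come from the Kafka admin/consumer APIs rather than hard-coded dictionaries.

```python
def consumer_lag(end_offsets, committed_offsets):
    """Per-partition lag = log-end offset minus the group's committed
    offset; a partition with no committed offset counts from offset 0."""
    return {
        tp: end - committed_offsets.get(tp, 0)
        for tp, end in end_offsets.items()
    }

# Illustrative offsets for three partitions of a hypothetical "payments" topic.
end = {("payments", 0): 1200, ("payments", 1): 950, ("payments", 2): 700}
committed = {("payments", 0): 1200, ("payments", 1): 800}

lag = consumer_lag(end, committed)
total_lag = sum(lag.values())  # a common single number to alert on
```

In practice the per-partition breakdown matters as much as the total: a single stuck partition (like partition 2 here, which has no commits at all) often points at a hung consumer rather than general under-provisioning.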
Scripting, Automation & DevOps
- Develop and manage automation scripts using Python and Shell scripting to reduce manual toil and improve operational efficiency.
- Design, implement, and maintain robust CI/CD pipelines, including automated patching workflows.
- Build and maintain Infrastructure as Code (IaC) using tools such as Terraform, Ansible, or similar.
- Containerize and orchestrate workloads using Docker and Kubernetes, with the opportunity to help migrate Kafka and related services to Kubernetes.
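A core pattern behind the automation and IaC work above is idempotence: compare desired state against actual state and apply only the difference. The sketch below is a hypothetical, self-contained example of that drift check for topic configuration; the config keys are real Kafka topic settings, but the "actual" values would normally be read back from the cluster, not hard-coded.

```python
def config_drift(desired, actual):
    """Return only the keys whose actual value differs from the desired
    value, i.e. the minimal change set an idempotent automation run
    would need to apply."""
    return {
        key: {"desired": want, "actual": actual.get(key)}
        for key, want in desired.items()
        if actual.get(key) != want
    }

# Hypothetical desired state (e.g. from Terraform/Ansible variables)
# versus the configuration read back from a running topic.
desired = {"retention.ms": "604800000", "min.insync.replicas": "2"}
actual = {"retention.ms": "86400000", "min.insync.replicas": "2"}

drift = config_drift(desired, actual)
```

Running automation off a computed change set like this, instead of blindly reapplying everything, keeps repeated pipeline runs safe and makes drift visible before anything is changed.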
Data Pipeline Development
- Build and optimize end-to-end data pipelines, with Kafka as a core streaming component alongside batch and ETL processes, enabling new AI, ML, and analytics use cases.
- Collaborate with development teams to implement best practices for data flow, security, scalability, and performance.
- Continuously identify and implement improvements to the software development lifecycle (SDLC).
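To ground the streaming side of the pipeline work above, here is a minimal sketch of a tumbling-window aggregation, the kind of stateful operation frameworks like Kafka Streams perform over event streams. The event data is hypothetical; a real implementation would consume from Kafka and handle out-of-order and late events.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Group (timestamp_ms, key) events into fixed, non-overlapping
    windows and count occurrences per (window start, key)."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_ms)
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical transaction events: (timestamp in ms, merchant id).
events = [(1000, "m1"), (1500, "m2"), (2500, "m1"), (3100, "m1")]
result = tumbling_window_counts(events, window_ms=2000)
```

Tumbling windows are the simplest windowing choice because each event lands in exactly one window; sliding or session windows trade that simplicity for finer-grained views.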
Cloud & Infrastructure
- Manage cloud infrastructure on AWS (EC2 instances, networking, storage, IAM, etc.).
- Support cloud migration tasks and assist in moving workloads between on-prem and cloud environments.
Required Qualifications / Skills
- 4+ years of overall engineering experience with 2+ years of hands-on experience supporting Kafka infrastructure in a large-scale production environment.
- Solid understanding of Kafka producer/consumer patterns in microservices and of Kafka’s distributed architecture.
- Solid Linux fundamentals: networking basics, logs, system troubleshooting, process/memory, disk.
- Strong scripting and automation skills (Python, Bash) with a track record of reducing operational toil.
- CI/CD pipeline development and Infrastructure-as-Code experience (Terraform preferred).
- Cloud engineering skills, preferably AWS (EC2, VPC, IAM, MSK/ElastiCache, CloudWatch).
- Familiarity with observability tools (metrics/logs/tracing concepts) and incident response practices.
- Basic understanding of distributed systems tradeoffs (availability, consistency, partitions, backpressure).
- Strong communication and presentation skills, with an emphasis on communicating effectively with executives.
- Flexibility with regard to working shifts, including on-call and weekend coverage.
Preferred Skills (Highly Desired)
Data Engineering & Pipelines
- Data engineering skills including data analytics, data processing, ETL, and data lake technologies (batch and streaming, file formats like Parquet, table formats like Iceberg/Delta/Hudi, basic orchestration).
- Experience with AWS data tools (Athena, Glue, Redshift, etc.), including Apache Iceberg table support.
- Exposure to Kafka Streams, Apache Flink, or similar stream processing frameworks.
Database & Caching Technologies
- Experience operating Redis deployments for caching, ephemeral state, queues/streams, and rate-limiting use cases.
- Relational DB experience: PostgreSQL and/or MySQL (indexing basics, vacuum/analyze, query plans, replication fundamentals).
- MongoDB operational familiarity (replica sets, elections, oplog basics, backup/restore).
Infrastructure & Security
- Container/Kubernetes familiarity (deployments, stateful workloads, storage classes).
- On-prem experience (VMware/KVM, storage, networking).
- Security fundamentals: least privilege, secrets management, encryption-in-transit/at-rest concepts.
- Experience working with PCI DSS (Payment Card Industry Data Security Standard) requirements.
What We Offer
- Direct impact on Verifone’s global payment infrastructure—your work keeps billions of transactions flowing reliably every year.
- Ground-floor involvement in building the data pipelines that will drive Verifone’s AI, machine learning, and analytics strategy.
- Hands-on participation in a Kafka-to-Kubernetes migration—a resume-building modernization project from day one.
- Multi-technology exposure—Kafka, Redis, MongoDB, PostgreSQL, MySQL, Snowflake—so you grow as a well-rounded platform engineer, not a single-tool specialist.
- Mentorship from senior engineers and Kafka architects with deep production experience.
- Clear career progression path from mid-level to senior and lead engineer.
- Funded certification paths: Confluent (CCDAK, Certified Administrator), AWS (Solutions Architect and related specialty certifications), and CKA, plus budget for courses and technical conferences.
- Hybrid/remote working flexibility with a collaborative team culture.
- Competitive compensation with performance-based incentives.
Our Commitment
Verifone is committed to creating a diverse environment and is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status. Verifone is also committed to compliance with all fair employment practices regarding citizenship and immigration status.