About Appier
Appier is an AI-native Agentic AI as a Service (AaaS) company that uses artificial intelligence (AI) to power business decision-making. Founded in 2012 with a vision of democratizing AI, Appier's mission is to turn AI into ROI by making software intelligent. Appier now has 17 offices across APAC, Europe, and the U.S., and is listed on the Tokyo Stock Exchange (Ticker: 4180). Visit www.appier.com for more information.
Our EDP team builds core data backend capabilities that power critical product experiences across ingestion, profile processing, and serving.
About the role
We are hiring a Senior Software Engineer, Data Backend to design and operate production-grade data backend systems at scale. This role combines backend service engineering and data platform engineering, with a strong focus on reliability, performance, and cost efficiency in a cloud-native environment.
You will work on high-throughput APIs/services in Go, as well as data processing and workflow components in Python/Spark/SQL, partnering closely with product, platform, and data teams.
What you'll do
Design, build, and maintain backend services and data platform components.
Develop and optimize streaming and batch data workflows.
Improve reliability and operability (monitoring, alerting, incident prevention/recovery).
Improve data quality and consistency through schema/contracts and validation mechanisms.
Diagnose and resolve production issues across service and data layers.
Drive performance and cost optimization initiatives for large-scale workloads.
Collaborate across teams to deliver roadmap milestones.
Mentor engineers and contribute to engineering standards and review culture.
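As one illustration of the schema/contract validation work in the list above, record-level contract checks at ingestion are a common building block. The sketch below is in Python (one of the role's required languages); the field names and rules are hypothetical, not Appier's actual contracts:

```python
# Minimal sketch of record-level contract validation for an ingestion
# pipeline. The required fields and their types are illustrative only.
REQUIRED_FIELDS = {"user_id": str, "event_ts": int, "event_type": str}

def validate_record(record: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the record passes."""
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    return errors

def partition_batch(batch: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a batch into valid records and rejects (dead-letter candidates)."""
    valid, rejected = [], []
    for record in batch:
        (valid if not validate_record(record) else rejected).append(record)
    return valid, rejected
```

Routing rejects to a dead-letter path rather than dropping them silently is what keeps the "data quality and consistency" responsibility above observable and debuggable.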
About you
[Minimum qualifications]
6+ years of software engineering experience in backend/data-intensive systems.
Strong proficiency in Go (required).
Strong proficiency in Python (required).
Strong proficiency in SQL (required).
Hands-on experience with distributed data systems (e.g., Kafka, Spark/Databricks).
Experience operating services/pipelines in cloud production environments (GCP preferred).
Solid understanding of reliability practices: observability, on-call, incident response.
Strong communication and ownership; able to drive ambiguous projects to completion.
[Preferred qualifications]
Experience with Databricks/Delta Lake/Lakehouse architectures.
Experience with Kubernetes/GKE and Terraform/IaC.
Experience with data governance (lineage, access control, PII handling).
Experience with identity/profile systems (entity resolution, merge logic).
Experience with FinOps/cost optimization for platform workloads.
[Language]
Fluent in English.
[Why join]
Work on high-impact, business-critical data backend systems.
Own real production outcomes across scale, quality, and cost.
Collaborate with strong cross-functional teams in AI + data.
Grow technical leadership while staying hands-on.
Open to overseas candidates / visa support
This position can be based in Taipei, Taiwan or Tokyo, Japan. For international candidates, Appier's Japan office provides visa sponsorship to ensure a smooth transition to Japan.
【 About the company 】
At Tomofun, we LOVE pets and technology. Our vision is to bring joy and innovation to every pet lover in the world.
We are pioneers in the pet tech industry. Our debut product, Furbo, the world's first AIoT treat-tossing pet camera, is the #1 bestseller on Amazon in 15 countries, including the US, UK, and Japan, and has millions of users in 133 countries. In 2019, we launched an AI-powered subscription service, the Furbo Pet Nanny, which sends smart alerts to notify users to check on the safety of their furbabies when they are home alone. Since the launch of Furbo Pet Nanny, we have saved hundreds of pets' lives and are continuing to innovate and develop new features with our state-of-the-art technology.
The learning curve is steep. But the work is exhilarating. Tomofun is on the cusp of a pet-tech boom and we’re looking for smart, innovative thinkers who aspire to incredible things.
> Learn more about us: furbo.com | LinkedIn | Medium Tech Blog
【 About the role 】
We are looking for a Senior Analytics Engineer to join our centralized data team and take ownership of domain-level data modeling, metric standardization, and dashboard delivery that power both product innovation and business growth.
You will collaborate closely with engineers, product managers, and data scientists — actively participating in product development scrums — and will play a critical role in designing data instrumentation, enabling trustworthy insights, and scaling decision-making infrastructure.
About Appier
Appier is a leading SaaS company empowering businesses with cutting-edge artificial intelligence (AI) to drive smarter decision-making. Founded in 2012 with a mission to democratize AI, we transform complex data into actionable insights, making AI accessible and profitable. With 17 offices across APAC, Europe, and the U.S., and listed on the Tokyo Stock Exchange (Ticker: 4180), Appier is at the forefront of AI innovation. Visit www.appier.com for more information.
About the Role
We’re on the lookout for an ambitious and technically outstanding Senior Software Engineer, Machine Learning to join our Enterprise Solution Engineering Team, AIXON. This elite team leverages state-of-the-art ML technologies to solve real-world marketing challenges by integrating omnichannel customer data at scale.
In this role, you’ll be the vital bridge between cutting-edge research and production-grade deployment. You’ll design, build, and optimize scalable, high-performance ML infrastructure—including data pipelines, APIs, monitoring systems, and workflow orchestration—that power transformative AI solutions.
What You’ll Do
Architect and operate resilient ML job execution frameworks covering training, inference, and post-processing workflows.
Develop and maintain API services and developer tooling to orchestrate ML workflows on Kubernetes using Argo Workflows, Helm, and Terraform.
Build scalable, efficient batch pipelines with Apache Spark to support large-scale ML training and evaluation.
Design and maintain robust data infrastructures using Trino, Databricks, and other modern database technologies, monitored with Prometheus and Grafana for high availability and observability.
Develop tooling that streamlines ML experimentation, accelerates production workflows, and empowers cross-functional teams to innovate rapidly.
Collaborate deeply with ML scientists to transform research prototypes into reliable, scalable, user-facing AI products.
Lead cloud infrastructure design and operations on GCP, leveraging managed services such as Google Compute Engine (GCE), Google Kubernetes Engine (GKE), Cloud Storage, Cloud Functions, Cloud Pub/Sub, Cloud SQL, BigQuery, and more.
Define and implement CI/CD pipelines with tools like Jenkins, GitHub Actions, or ArgoCD to enable seamless, automated deployments.
Harness distributed computing and parallel programming principles to optimize system resource utilization and performance.
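As a small illustration of the "resilient ML job execution" bullet above, a retry wrapper with exponential backoff is one common building block of such frameworks. This standalone Python sketch uses invented names and defaults; it is not Appier's actual framework:

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=0.01):
    """Run a job callable, retrying transient failures with exponential backoff.

    `job` is any zero-argument callable; the sleep doubles after each failed
    attempt. Names and defaults here are illustrative, not a real framework.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted; surface the error to the caller
            time.sleep(base_delay * 2 ** (attempt - 1))
```

In a production framework this skeleton would typically be extended with retry budgets per failure class, jittered delays, and metrics emitted to the monitoring stack (e.g. Prometheus) mentioned above.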
What You Bring
Bachelor’s degree in Computer Science, Engineering, or a related technical field (Master’s degree preferred).
5+ years of hands-on experience in ML platform engineering, MLOps, or data infrastructure, deploying enterprise-grade machine learning systems at scale.
Expert proficiency in Python, Java, or Go, with solid foundations in data structures and algorithm design.
In-depth experience with cloud environments (AWS or GCP) and cloud-native service management.
Proven mastery of Docker containers and Kubernetes cluster management, including resource provisioning, autoscaling, and deployment best practices.
Strong understanding of the ML lifecycle—from training and prediction to evaluation, backtesting, and feedback loops.
Familiarity with Git workflows and Linux-based development environments.
Passionate about continual learning and innovation, leveraging AI-powered developer tools like GitHub Copilot and ChatGPT to boost productivity.
What Will Set You Apart
Experience in the MarTech industry or other customer-centric domains, eager to deliver products that delight users and drive business impact.
Demonstrated architectural leadership and ownership, skillfully driving complex, cross-team platform initiatives.
Strong grasp of deep learning fundamentals and end-to-end ML workflow platforms such as Kubeflow, MLflow, or AWS SageMaker.
Hands-on experience with distributed data processing frameworks like Apache Spark, and pipeline orchestration tools such as Apache Airflow, Argo Workflows, or Luigi.
Expertise in production-level ML applications, including handling data imbalance, preventing data leakage, and optimizing resource consumption for large-scale training and serving.
Familiarity with real-time online inference architectures and batch processing trade-offs.
Enthusiastic adopter of “vibe coding” culture—collaborative, transparent, and always pushing technical excellence together.
Prior experience building or developing applications related to large language models (LLM), multi-agent LLM systems, or natural language processing (NLP).
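For instance, the "preventing data leakage" point above often comes down to splitting training and evaluation data by time rather than at random. A minimal sketch, with hypothetical field names:

```python
def time_based_split(rows, split_ts, ts_key="event_ts"):
    """Split rows into train/test by timestamp so the model never sees the future:
    rows before `split_ts` train, rows at or after it evaluate. A random split
    would leak post-cutoff information into training.
    """
    train = [r for r in rows if r[ts_key] < split_ts]
    test = [r for r in rows if r[ts_key] >= split_ts]
    return train, test
```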
Why Join Appier?
At Appier, you’ll stand at the frontier of AI innovation, working alongside world-class engineers and researchers to create products that transform entire industries. Here, your engineering expertise will directly impact millions of users and drive revolutionary advances in marketing technology. If you’re ready to tackle challenging ML infrastructure problems with passion and creativity, Appier is your ultimate playground!