Portfolio intelligence for hedge funds and asset managers — analyze risk, decompose performance, and surface alpha.
About Arcana
Arcana is a portfolio intelligence platform used by hedge funds and asset managers to analyze performance and risk. We’re rethinking the tools institutional investors rely on—and we’re hiring analysts who want to drive that transformation.
What You’ll Do
1. Build & Scale High-Impact Teams
2. Drive Technical Strategy & Execution
3. Lead Engineering Delivery at Scale
4. Champion Engineering Excellence
5. Coach & Develop Talent
What We’re Looking For
Ready to apply?
Apply to Arcana Analytics
Staff Software Engineer (Data Platform)
We're looking for a seasoned Staff Software Engineer to help build our data platform. You'll draw on deep expertise in distributed systems, data architectures, and large-scale processing to build high-performance data platforms. The role demands proficiency in designing and scaling compute- and I/O-intensive data systems, ensuring reliability, efficiency, and cost optimization across the data lifecycle.
Responsibilities:
● Design and build scalable data platform components for batch and real-time data processing.
● Architect, develop, and operationalize large-scale data systems across ingestion, transformation, and serving layers.
● Build and manage robust data pipelines ensuring high reliability, scalability, and cost efficiency.
● Develop reusable frameworks and tooling to accelerate productivity for data engineers and data scientists.
● Leverage expertise in Python, Airflow, SQL, and cloud platforms to build production-grade data solutions.
● Optimize query performance and data models using strong understanding of columnar OLAP systems such as ClickHouse, Doris, and StarRocks.
● Implement streaming and near real-time data processing systems.
● Translate complex business requirements into scalable and efficient data platform solutions.
● Work collaboratively with cross-functional teams and provide technical leadership and mentorship.
● Drive architectural decisions by evaluating tradeoffs and selecting the right tools for the problem.
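The reliability and idempotency themes in the responsibilities above can be made concrete with a small sketch. This is illustrative only, not Arcana's actual tooling: a generic retry wrapper with exponential backoff around a pipeline task, where all names are hypothetical.

```python
import time
import random

def run_with_retries(task, max_attempts=3, base_delay=0.1):
    """Run a pipeline task with exponential backoff. The task must be
    idempotent so a retry after a partial failure cannot duplicate output."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except Exception:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, base_delay))

# Example: a flaky extract step that succeeds on the second attempt.
attempts = {"n": 0}
def flaky_extract():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise ConnectionError("transient source outage")
    return ["row1", "row2"]

rows = run_with_retries(flaky_extract)
```

In a real orchestrator (Airflow or Temporal, both named in this posting), retry policy and backoff live in the framework's task configuration rather than hand-rolled code; the sketch just shows why idempotency is the prerequisite.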
Requirements:
● Bachelor's Degree in Computer Science, Information Technology, or a similar discipline.
● 8+ years of professional experience in data engineering, backend systems, or distributed systems.
● Proven experience building scalable data platforms and large-scale data systems.
● Strong experience with ETL pipelines, data integration, and workflow orchestration systems such as Airflow or Temporal.
● Hands-on experience in Python and SQL with strong understanding of data warehouse concepts.
● Experience working with distributed OLTP/OLAP databases such as ClickHouse, PostgreSQL, Cassandra, or Elasticsearch.
● Knowledge of messaging and streaming systems such as Kafka.
● Experience with cloud platforms (AWS/GCP) and big data tools such as Spark.
● Strong understanding of columnar storage systems and query optimization techniques.
● Solid understanding of distributed systems fundamentals and associated tradeoffs.
● Experience working with containers and orchestration tools such as Docker and Kubernetes.
● Strong Linux fundamentals and system-level debugging skills.
● Familiarity with modern data architectures such as Lakehouse (Iceberg, Hudi, Delta) is a plus.
Title: Senior AI Engineer — Inference & Agent Systems
Location:
- BLR/Remote India
What We're Building
Arcana is building AI agents that synthesize information across heterogeneous sources and deliver structured, reasoned answers in real time. The product only works if the agents are fast, reliable, and correct, not approximately correct.
Our stack: Go + Temporal for orchestration, a Plan-Execute-Synthesize agent architecture, and an evaluation harness we use to measure every regression. The problems are hard. The latency bar is aggressive. The accuracy requirements are unforgiving.
The Work
Inference Optimization
- Drive TTFT (time to first token) below 400ms for multi-step agent pipelines
- Streaming optimization: first token to user while sub-agents are still running
- KV cache strategy, prompt compression, dynamic context window management
- Multi-provider routing: model selection by latency, cost, and task type across OpenAI, Anthropic, Gemini, and open-weight models
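To illustrate the multi-provider routing bullet, here is a minimal sketch. The provider table, latency and cost numbers, and function names are all hypothetical, chosen only to show the shape of the tradeoff, not benchmarks of real providers.

```python
# Hypothetical provider table; figures are invented for illustration.
PROVIDERS = {
    "fast-small": {"p50_ms": 180, "usd_per_1k_tok": 0.15, "quality": 0.70},
    "balanced":   {"p50_ms": 420, "usd_per_1k_tok": 0.60, "quality": 0.85},
    "frontier":   {"p50_ms": 900, "usd_per_1k_tok": 3.00, "quality": 0.97},
}

def route(task_type, latency_budget_ms):
    """Pick the highest-quality provider whose p50 latency fits the budget.
    Cheap classification tasks always go to the lowest-cost model."""
    if task_type == "classify":
        return min(PROVIDERS, key=lambda p: PROVIDERS[p]["usd_per_1k_tok"])
    eligible = {p: v for p, v in PROVIDERS.items()
                if v["p50_ms"] <= latency_budget_ms}
    if not eligible:
        # Nothing fits the budget: degrade to the fastest model available.
        return min(PROVIDERS, key=lambda p: PROVIDERS[p]["p50_ms"])
    return max(eligible, key=lambda p: PROVIDERS[p]["quality"])

print(route("synthesize", 400))  # only fast-small's p50 fits a 400ms budget
```

A production router would score on live latency measurements and per-task accuracy evals rather than a static table, but the eligibility-then-quality ordering is the core idea.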
Agent Architecture
- Design and implement Plan-Execute-Synthesize pipelines that run sub-agents in parallel DAGs, not sequential chains
- Build reliable orchestration on top of Temporal: retries, timeouts, partial failure recovery, idempotency
- Structured output enforcement: JSON schema validation, retry loops on malformed LLM output, graceful degradation
- Tool call design: schema design that LLMs actually follow reliably across providers
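The structured-output bullet above can be sketched as a small validate-and-retry loop. This is an illustrative sketch, not Arcana's implementation: the model call is injected as a plain callable so the code stays provider-agnostic, and "schema validation" is reduced to a required-keys check.

```python
import json

def enforce_schema(call_llm, required_keys, max_attempts=3):
    """Parse LLM output as JSON and check required keys, re-prompting on
    failure. Returns None after max_attempts so the caller can degrade
    gracefully (e.g. fall back to unstructured text)."""
    prompt_suffix = ""
    for _ in range(max_attempts):
        raw = call_llm(prompt_suffix)
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            prompt_suffix = " Respond with valid JSON only."
            continue
        if isinstance(obj, dict) and all(k in obj for k in required_keys):
            return obj
        prompt_suffix = f" JSON must contain keys: {sorted(required_keys)}."
    return None

# Stub model: emits truncated JSON first, then a valid object.
replies = iter(['{"answer": "42"', '{"answer": "42", "sources": []}'])
result = enforce_schema(lambda _: next(replies), {"answer", "sources"})
```

A full implementation would validate against a real JSON Schema and feed the parse error back into the retry prompt, but the loop structure is the same.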
Evaluation & Harness
- Own the eval framework end to end: ground truth datasets, automated scoring pipelines, regression detection on every PR
- LLM-as-judge pipelines for qualitative output assessment
- Latency regression testing: p50/p95/p99 tracked across every deployment
- Adversarial test case design: ambiguous queries, missing data, conflicting sources, malformed tool responses
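As a sketch of the percentile tracking behind latency regression testing, Python's standard library can compute p50/p95/p99 directly; a real harness would feed these from request traces rather than an in-memory list, and the function name here is hypothetical.

```python
import statistics

def latency_report(samples_ms):
    """Summarize request latencies at p50/p95/p99 using linear
    interpolation over the sorted sample."""
    qs = statistics.quantiles(sorted(samples_ms), n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

# A uniform 1..100 ms sample makes the percentiles easy to eyeball:
# p50 = 50.5, p95 = 95.05, p99 = 99.01.
report = latency_report(list(range(1, 101)))
```

Regression detection is then a comparison of these values against the previous deployment's report, with an alert threshold per percentile; p99 typically gets the tightest scrutiny because tail latency is where agent pipelines degrade first.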
Infrastructure
- Model serving and cold start optimization
- Async worker architecture for parallel sub-agent execution
- Observability: trace every token, every tool call, every synthesis step
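The async fan-out pattern behind parallel sub-agent execution can be sketched with asyncio. The sub-agent body here is a stand-in sleep, not a real LLM or tool call, and the names are hypothetical.

```python
import asyncio

async def run_subagents(queries):
    """Fan sub-agent calls out in parallel: total wall time tracks the
    slowest agent, not the sum, which is what keeps latency budgets
    feasible for multi-step pipelines."""
    async def subagent(q):
        await asyncio.sleep(0.01)  # stand-in for an LLM/tool round trip
        return f"result:{q}"
    # return_exceptions=True: one failed sub-agent yields a partial result
    # set instead of cancelling its siblings.
    return await asyncio.gather(*(subagent(q) for q in queries),
                                return_exceptions=True)

results = asyncio.run(run_subagents(["risk", "perf", "news"]))
```

The production version described in this posting runs on Temporal workers in Go, where the same fan-out is expressed as parallel activity executions with per-activity retry policies.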
What We're Looking For
You've built something that runs in production at a meaningful scale and you understand why it's fast (or why it isn't).
Strong signal:
- You've worked on inference pipelines where TTFT was the primary metric and you moved it meaningfully
- You've built multi-step agent systems and you know where they break, not from reading papers but from watching them fail in production
- You've written eval harnesses from scratch and you have opinions about what makes a ground truth dataset actually useful
- You've debugged LLM non-determinism in production and built systems resilient to it
- You've worked with streaming LLM responses and built infrastructure around partial output handling
Weaker signal (but not disqualifying):
- You've fine-tuned models but haven't shipped inference systems
- You've used LangChain/LlamaIndex but haven't built the layer underneath
- Strong ML research background without systems exposure
Stack familiarity (we care more about depth than match): Go, Python, Temporal, Kafka, PostgreSQL, Docker
Why This Role
The problems here don't have blog posts about them yet. Parallel agent DAG execution under hard latency budgets, streaming synthesis across partial sub-agent results, eval harnesses for non-deterministic multi-step systems: these are genuinely unsolved at production quality. Small team. High ownership. Every engineer's decisions ship to production.
Who We Want to Hear From
You've shipped inference systems at:
- A real-time AI product (search, coding assistant, chat at scale)
- A model serving infrastructure company
- An agent platform (any domain)
Or you've built eval/harness infrastructure that a team of 10+ engineers actually trusted to catch regressions.
Apply
Send to: careers@arcana.io
We respond to every application.
Title: Applied AI Engineer — Inference & Agent Systems
Location:
United States
What We're Building
Arcana is building AI agents that synthesize information across heterogeneous sources and deliver structured, reasoned answers in real time. The product only works if the agents are fast, reliable, and correct, not approximately correct.
Our stack: Go + Temporal for orchestration, a Plan-Execute-Synthesize agent architecture, and an evaluation harness we use to measure every regression. The problems are hard. The latency bar is aggressive. The accuracy requirements are unforgiving.
The Work
Inference Optimization
- Drive TTFT below 400ms for multi-step agent pipelines
- Streaming optimization: first token to user while sub-agents are still running
- KV cache strategy, prompt compression, dynamic context window management
- Multi-provider routing: model selection by latency, cost, and task type across OpenAI, Anthropic, Gemini, and open-weight models
Agent Architecture
- Design and implement Plan-Execute-Synthesize pipelines that run sub-agents in parallel DAGs, not sequential chains
- Build reliable orchestration on top of Temporal: retries, timeouts, partial failure recovery, idempotency
- Structured output enforcement: JSON schema validation, retry loops on malformed LLM output, graceful degradation
- Tool call design: schema design that LLMs actually follow reliably across providers
Evaluation & Harness
- Own the eval framework end to end: ground truth datasets, automated scoring pipelines, regression detection on every PR
- LLM-as-judge pipelines for qualitative output assessment
- Latency regression testing: p50/p95/p99 tracked across every deployment
- Adversarial test case design: ambiguous queries, missing data, conflicting sources, malformed tool responses
Infrastructure
- Model serving and cold start optimization
- Async worker architecture for parallel sub-agent execution
- Observability: trace every token, every tool call, every synthesis step
What We're Looking For
You've built something that runs in production at a meaningful scale and you understand why it's fast (or why it isn't).
Strong signal:
- You've worked on inference pipelines where TTFT was the primary metric and you moved it meaningfully
- You've built multi-step agent systems and you know where they break, not from reading papers but from watching them fail in production
- You've written eval harnesses from scratch and you have opinions about what makes a ground truth dataset actually useful
- You've debugged LLM non-determinism in production and built systems resilient to it
- You've worked with streaming LLM responses and built infrastructure around partial output handling
Weaker signal (but not disqualifying):
- You've fine-tuned models but haven't shipped inference systems
- You've used LangChain/LlamaIndex but haven't built the layer underneath
- Strong ML research background without systems exposure
Stack familiarity (we care more about depth than match): Go, Python, Temporal, Kafka, PostgreSQL, Docker
Why This Role
The problems here don't have blog posts about them yet. Parallel agent DAG execution under hard latency budgets, streaming synthesis across partial sub-agent results, eval harnesses for non-deterministic multi-step systems: these are genuinely unsolved at production quality. Small team. High ownership. Every engineer's decisions ship to production.
Who We Want to Hear From
You've shipped inference systems at:
- A real-time AI product (search, coding assistant, chat at scale)
- A model serving infrastructure company
- An agent platform (any domain)
Or you've built eval/harness infrastructure that a team of 10+ engineers actually trusted to catch regressions.
Apply
Send to: careers@arcana.io
We respond to every application.
As our Lead Security Engineer, you'll own and elevate Arcana's overall security posture - cloud, on-prem, and everything in between. You'll design and enforce policies, automate controls, and harden infrastructure end-to-end. While your primary focus will be on our GCP resources, you'll also partner with teams across networking, applications, and compliance to ensure we're secure by design and resistant to drift.
Responsibilities:
Requirements:
Helpful Experience:
We're seeking a passionate Manual QA Engineer to drive the quality and performance of our software products. You'll take charge of test plans, hunt down bugs, and collaborate with development and product teams to ensure high-quality releases. Your work will be key in maintaining the high standards of our platform and enabling product development at Arcana.
We're looking for a seasoned DevOps Engineer (Bangalore, Coimbatore) with strong computer science fundamentals and a deep understanding of data structures, algorithms, and system design. You'll develop and improve compute- and I/O-intensive applications while ensuring their performance and reliability.
We're looking for a Senior Data Scientist to join our team, working alongside our founders, financial analysts, product managers, and engineers. You'll get hands-on with massive datasets, crafting the metrics that matter to institutional investors worldwide. If you're curious and passionate about financial technology, this is an opportunity to join an extremely well-capitalized startup, with a proven team of senior finance and tech talent, and help build the future of investing tools.
Responsibilities:
Requirements:
We're looking for a seasoned Senior Data Engineer with strong computer science fundamentals and a deep understanding of data structures, algorithms, and system design. You'll develop and improve compute- and I/O-intensive applications while ensuring their performance and reliability.
Responsibilities:
Requirements:
We are looking for an IT Support Engineer to provide day-to-day on-site and remote support to employees, ensuring smooth operations across hardware, software, and core IT services. This role requires a security-first mindset, strong troubleshooting skills, and the ability to work collaboratively with teams to support business operations at scale.
Key Responsibilities
Requirements
Nice to Have
We're looking for a seasoned Staff/Lead Backend Engineer with strong computer science fundamentals and a deep understanding of data structures, algorithms, and system design. You'll develop and improve compute- and I/O-intensive applications while ensuring their performance and reliability.
Responsibilities:
Requirements:
We're looking for a passionate and experienced Lead iOS Engineer who values clean code, sound design, and collaboration. We're building a next-gen, ultra-fast, secure portfolio intelligence platform that blends speed, delight, and reliability, and we're ready to bring that experience to iOS. This is a unique opportunity to build the first version of Arcana's iOS application from the ground up.
Responsibilities:
Requirements:
Must-Have Skill: