Lead the maturation of Securly's content classification system — building the ML infrastructure that determines, at scale, whether web content is appropriate for K-12 students, and establishing the rigorous evaluation framework that product and leadership teams depend on.
This is applied ML with direct student safety impact — not research. You will lead a significant uplift of Securly's classification models: refactoring binary models to proper multiclass classification, building labeled evaluation datasets, and producing standardized model cards with per-category precision, recall, F1, and confusion matrix analysis.
At L5, you are the technical leader of the data science function for content safety. You will define the evaluation methodology the team follows, set the standard for what a model card must contain before a model ships, mentor the team on applied ML rigor, and serve as the interface between data science and engineering on production integration constraints.
Level: L5
Experience: 8–15 Years
Location: Pune, India
Work Type: Hybrid (2 days onsite)
Reports To: Engineering Manager, Data Platform
What It Means to Be L5 at Securly
L5 at Securly is a Staff Engineer. You are the technical owner, not just an implementer.
Drive technical direction for your initiative end-to-end: from architecture to production, with minimal oversight from your engineering manager.
Identify and resolve ambiguity in requirements, system boundaries, and design tradeoffs without waiting for a fully-formed spec.
Mentor L3/L4 engineers on the team: code reviews, design feedback, pairing, and raising the bar for what production-quality work looks like.
Partner with your L6 technical lead and the Distinguished Engineer on architectural decisions, surfacing tradeoffs clearly rather than deferring them upward.
Contribute to cross-team engineering standards: you are expected to influence practices beyond your immediate squad.
Translate technical context into clear written artifacts that non-engineers (PM, Support, Leadership) can act on.
Participate in on-call rotation and own the full incident lifecycle for your system: detection, diagnosis, resolution, and retrospective.
What You'll Do
Define the evaluation methodology for content classification at Securly: establish what a model card must contain and hold every model release to that standard before it ships.
Lead the multiclass refactor of Securly's content classification models: redesign binary models to handle multi-label, multi-class content categories (Adult Content, Violence, Self-Harm, Social Media, and others).
Build and maintain labeled evaluation datasets with robust annotation workflows; address class imbalance and label noise systematically; document dataset curation decisions in a versioned data card.
Connect offline evaluation to production monitoring — surface classification drift and error patterns before they become customer-facing issues.
Investigate and resolve misclassification errors: false positives (over-blocking) and false negatives (under-blocking); produce written root cause analyses.
Build and maintain training data pipelines: ingestion, cleaning, labeling, and versioning at scale.
Mentor the existing AI team on evaluation methodology, model development practices, and data science communication rigor.
Communicate precision/recall tradeoffs to product managers and engineers; produce executive-level summaries of classification quality for leadership.
Collaborate with engineering to integrate model outputs into the production filtering stack with appropriate latency and reliability constraints.
Research and prototype improvements: feature representations, model architectures, active learning for label efficiency, domain adaptation for emerging content categories.
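The model-card requirement above can be made concrete with a small sketch. The categories and predictions below are hypothetical, not Securly's data; the point is the per-category precision/recall/F1 table such a card would report, computed with scikit-learn (which the requirements name):

```python
# Hypothetical sketch: per-category precision/recall/F1 for a
# multi-label content classifier, the kind of table a model card
# would report. Categories and indicator matrices are illustrative.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

CATEGORIES = ["adult", "violence", "self_harm", "social_media"]

# Binary indicator matrices: rows = pages, columns = categories.
y_true = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 1],
                   [0, 0, 1, 0],
                   [0, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 1]])

# average=None returns one score per category (column).
prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, average=None, zero_division=0)

for name, p, r, f in zip(CATEGORIES, prec, rec, f1):
    print(f"{name:<14} precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```

Note how the self_harm row collapses to zero recall on this toy data: exactly the kind of per-category failure an aggregate accuracy number would hide.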
Skills & Requirements
Must-Have
Machine learning — multi-label/multi-class classification, model evaluation methodology, handling class imbalance, feature engineering for text and URL data. 5+ years in applied ML roles.
Python (ML stack) — production-quality code: scikit-learn, PyTorch or TensorFlow, pandas, numpy. Notebooks for exploration; production-grade pipelines for delivery.
Text / NLP feature engineering — URL tokenization, domain analysis, HTML content features, TF-IDF or embedding-based representations for web content classification.
ML evaluation rigor — precision/recall tradeoffs, confusion matrix analysis, offline vs. online evaluation, A/B testing, reproducible model cards. At L5, you define the evaluation standard.
Data engineering for ML — training data pipelines, data versioning, handling noisy and partially labeled datasets, annotation workflow design.
Technical communication and stakeholder influence — ability to present quantitative model quality findings to both engineering and non-technical leadership.
Strongly Preferred
Large-scale classification in production — shipping models with latency and throughput constraints; understanding the gap between offline eval metrics and live production behavior.
Active learning / annotation workflows — strategies for efficient label acquisition on large, imbalanced datasets.
Cloud ML infrastructure — AWS SageMaker, GCP Vertex AI, or equivalent for training pipelines, experiment tracking, and model deployment.
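The active-learning item above is often implemented as uncertainty sampling: spend the annotation budget on the examples the current model is least sure about. A generic sketch with toy probabilities (not a real workflow):

```python
# Hypothetical sketch: uncertainty sampling for label-efficient
# annotation. Pick unlabeled items whose predicted probability is
# closest to the decision boundary (0.5 for a binary model).
import numpy as np

def pick_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` most uncertain examples."""
    # 1.0 at p = 0.5, falling to 0.0 at p in {0, 1}.
    uncertainty = 1.0 - np.abs(probs - 0.5) * 2
    return np.argsort(-uncertainty)[:budget]

probs = np.array([0.02, 0.48, 0.97, 0.55, 0.10])
print(pick_for_annotation(probs, budget=2))  # prints [1 3]
```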
Nice to Have
Web content / URL classification domain — prior work on web categorization, safe browsing, or content policy systems.
K-12 / CIPA compliance — understanding of regulated content categories and compliance requirements around false negative rates.
LLM-based classification — zero-shot or few-shot content classification for emerging categories without labeled training data.
Graph / network features — domain co-occurrence, DNS graph signals, or network-based features for domain classification at scale.
Who You Are
You have shipped ML models to production and lived with the consequences — you know what model drift looks like and how to catch it before it becomes a customer issue.
You treat evaluation as a first-class engineering artifact. A model without a model card is not finished — and you set and enforce that standard for the team.
You define the methodology, not just apply it. You produce the evaluation framework that other data scientists use, and you hold them to it.
You can communicate precision/recall tradeoffs to a product manager and to a senior engineer in the same conversation, calibrated to each audience.
You are energized by problems with real stakes: a false negative in Self-Harm classification is not an acceptable error.
You mentor by example and by expectation: your code, your analysis, and your documentation set the standard.
About Securly
Securly processes over 1.1 billion requests per day and 54 TB of data daily, protecting more than 20 million students across 20,000+ schools globally. Since pioneering the first cloud-based web filter for K-12 in 2013, Securly has built one of the most trusted, high-scale platforms for student safety, wellness, and engagement. By turning data into meaningful, actionable intelligence, Securly enables schools to identify risk earlier, reduce harmful incidents, and strengthen student support.
We are proud to be consistently recognized as a Top Place to Work, named a Top 40 Most Used EdTech platform, and included on the GSV 150 list as one of the most transformational growth companies in digital learning and workforce skills.
Benefits
Comprehensive Health Insurance (employee, parents, spouse, children)
Accidental & Term Life Insurance
Learning & Development reimbursement
Paid Time Off
Public Holidays (10+ per year)
Retirement Benefits (EPF & gratuity)
Parental Leave (as per statutory norms)
Equal Opportunity Employer
Securly is an Equal Opportunity Employer committed to inclusion, fairness, and respect. We welcome applicants from all backgrounds, identities, and experiences. #LI-REMOTE #LI-DO1
• Are you curious, excited by experimentation, and always looking to innovate?
• Do you want to work in embedded payments, where you can keep learning and developing whilst getting hands-on experience?
• Do you want the opportunity to play an important role in a rapidly growing and exciting Fintech business?

If so, we would love to connect and collaborate!
About us
At Modulr, our vision is a world where all businesses are powered by embedded payments. Modulr enables businesses, from SMEs to Enterprise, initially across the UK and Europe, to efficiently pay in, collect, and disburse funds instantly via a range of payment schemes, accounts, and card products. We have created an industry-leading API platform with comprehensive online tools and access to meet the demands of daily business payments.

We have two routes to market. Our Core Business Payments product allows customers in any sector to connect to us and our expanding network of accounting and payroll platforms, including Sage, Xero, BrightPay and IRIS, to automate payments. Our Vertical Solutions product targets a growing range of industry verticals which connect their IT platforms directly to our APIs and webhooks. We solve complex payment problems for hundreds of clients in a range of industries, including Travel, Lending, Wage Advance, and Investment & Wealth.

We are deeply integrated into the payment ecosystem. In the UK, we are direct participants in Faster Payments and Bacs, and Modulr holds settlement accounts at the Bank of England. Our payment network connectivity includes CHAPS, Open Banking, SEPA, SWIFT, and account issuance in multiple currencies. We are principal issuing members of the Visa and Mastercard schemes across the UK and Europe.

Our regulatory permissions and governance structure are the foundations of our business. We are regulated and supervised as an Authorised Electronic Money Institution (AEMI) in the UK by the Financial Conduct Authority and in the Netherlands by De Nederlandsche Bank.

Our founding team has a wealth of experience in the payments industry and in growing successful businesses. Modulr is backed by the venture arms of payments giants PayPal and FIS, as well as growth investors Blenheim Chalcot, General Atlantic, Frog Capital and Highland Europe.
Modulr now has over 400 employees spread globally across offices in London, Edinburgh, Amsterdam, and Mumbai.
Modulr values
• Building the extraordinary; going that extra mile.
• Owning the opportunity; be passionate and proud of the time you invest.
• Move at pace; reach goals faster whilst supported on your career journey.
• Achieve it together; working collaboratively and being a Modulite.
About the Role: We are looking for a passionate Data Scientist with 2–3 years of experience to build ML/DL and Generative AI solutions for high-impact financial use cases.
Key Responsibilities:
• Develop ML/DL models for fraud detection, risk scoring, and personalization
• Build and deploy GenAI applications using LLMs
• Design and implement Agentic AI workflows
• Work with LangChain and LangGraph for orchestration
• Use tools like Claude and Cursor/Copilot for AI-assisted development
• Implement monitoring using LangSmith
• Build APIs for model deployment
• Collaborate with cross-functional teams
• Maintain MLOps pipelines and CI/CD workflows
Key Requirements:
• 2–3 years of experience in Data Science / ML / DL
• Strong Python skills
• Experience with ML frameworks (PyTorch, TensorFlow, XGBoost)
• Hands-on experience with GenAI, LLMs, and prompt engineering
• Experience with LangChain and LangGraph
• Familiarity with Agentic AI concepts
• Exposure to tools like Claude and Cursor/Copilot
• Experience with LangSmith for debugging/monitoring
• Knowledge of APIs, FastAPI/Flask, and Docker
• Experience with MLOps tools (MLflow, DVC)
• FinTech domain experience preferred
ModInclusion
We believe that by seeing Modulr, and the world, from all sorts of angles, we can make life better for all. We want you to know that the things that make you, you — like your identity, age, ability, and background — are things that we will always celebrate and support with open arms. As such, we are keen to maximise the diversity of our workforce and actively encourage applications from anyone and everyone.
Headquartered in Santa Barbara, California, HG Insights is the global leader in technology intelligence. We help the world’s most innovative companies accelerate their go-to-market efforts with precision through advanced data science methodologies and proprietary data assets. We offer a culture that blends innovation, collaboration, and growth, where each team member is empowered to make a measurable impact.
Role Overview
We are looking for a Staff/Senior Machine Learning Engineer to join our growing AI and Data Platform team in Pune, India. In this role, you will be responsible for designing, building, and scaling ML systems that power our core data intelligence products. You’ll work at the intersection of data engineering and machine learning, collaborating closely with data scientists, software engineers, and product teams to turn models and agents into robust, production-ready systems.
This is a high-impact role suited for someone who thrives on ownership, scalability, and deploying real-world ML and AI solutions at scale.
Key Responsibilities
ML & AI System Development
Design and build end-to-end ML pipelines, from data ingestion and feature engineering to model training, serving, and monitoring.
Productionize AI agents, including LLM-backed workflows, multi-step tool use, and API orchestration.
Collaborate with data scientists to bring experimental models into stable, high-performing production systems.
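As a hedged illustration of the "ingestion to training" slice of such a pipeline, here is a toy scikit-learn sketch (the texts and labels are invented for illustration, not HG Insights data):

```python
# Hypothetical sketch: feature extraction and model training chained
# into a single scikit-learn Pipeline, so the same object handles
# raw text at both fit and predict time. Data is a toy example.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["cloud data warehouse pricing", "gpu cluster autoscaling",
         "invoice overdue payment", "payment terms net 30"]
labels = ["infra", "infra", "billing", "billing"]

pipe = Pipeline([
    ("features", TfidfVectorizer()),
    ("model", LogisticRegression()),
])
pipe.fit(texts, labels)

# Only 'gpu' is in-vocabulary for this query.
print(pipe.predict(["kubernetes gpu nodes"]))
```

In production the same pattern extends with persisted vectorizer state, versioned artifacts, and a serving layer, which is where the MLOps workflows below come in.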
MLOps & Infrastructure
Implement and maintain MLOps workflows (CI/CD for ML), tracking experiments, managing model versions, and automating retraining.
Set up observability for models and agents, including tracing, logging, latency/error tracking, and fallback logic.
Ensure infrastructure supports scalability, performance, and compliance requirements.
Technical Leadership & Collaboration
Lead technical architecture for key AI and ML systems, ensuring maintainability and extensibility.
Partner with product and platform teams to translate business needs into reliable and performant ML solutions.
Mentor junior engineers and help elevate technical practices across the engineering org.
Minimum Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
8+ years of industry experience, with at least 4+ years in ML engineering or MLOps.
Strong programming skills in Python and experience with ML libraries such as PyTorch, TensorFlow, and scikit-learn.
Experience deploying AI agents in production, including retrieval-augmented generation (RAG), multi-step reasoning, and tool-use orchestration.
Familiarity with LangChain, LlamaIndex, Haystack, or similar frameworks for agent development.
Hands-on with LLMs (OpenAI, Anthropic, Cohere, etc.), embedding models, and vector store integration.
Expertise in MLOps tools like MLflow, Airflow, SageMaker, or Kubeflow.
Experience with cloud platforms (preferably AWS), Docker/Kubernetes, and large-scale data systems (e.g., Snowflake, Databricks).
Excellent problem-solving and system design skills, including CI/CD, testing, and infrastructure-as-code.
Preferred Qualifications
Experience with streaming pipelines, online inference, and model monitoring at scale.
Exposure to agent memory, feedback loops, or AI workflow orchestration in customer-facing products.
Prior contributions to open-source ML or LLM projects are a plus.
Why Join HG Insights?
Work on high-impact AI products used by global enterprise customers.
Lead the ML engineering strategy and shape our AI platform.
Competitive compensation and benefits tailored for India-based employees.
A collaborative and innovation-focused team environment.
Ready to build the future of AI infrastructure? Apply now to join HG Insights as a Staff Machine Learning Engineer in Pune.
Capco, a Wipro company, is a global technology and management consulting firm. We were named Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.
MAKE AN IMPACT
We bring innovative thinking, delivery excellence, and thought leadership to help our clients transform their businesses. Together with our clients and industry partners, we deliver disruptive work that is changing the energy and financial services industries.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
Job Details
Lead AI Engineer
Location: Pune (Hybrid)
Role Purpose
Design and deliver AI/ML and Generative AI solutions that enhance client intelligence, automation, and decision-making across Private Banking platforms, leveraging data sourced from core banking systems (e.g. Avaloq), CRM, and digital channels.
Key Responsibilities
Develop and deploy AI/ML and GenAI models (e.g. NLP, summarization, recommendation engines)
Build scalable data pipelines and feature engineering workflows using structured and unstructured data
Integrate AI capabilities into enterprise platforms via APIs and microservices.
Work with data originating from core banking platforms (e.g. Avaloq) including client, portfolio, and transaction data.
Enable AI-driven use cases such as client insights, next-best-action, and automation of RM workflows.
Implement model monitoring, governance, and performance optimization aligned to enterprise standards.
Collaborate with engineering and business teams to translate wealth use cases into scalable solutions.
Required Experience
Experience working with wealth management data (clients, portfolios, transactions)
Understanding of data flows from core banking systems such as Avaloq
Awareness of AI governance, model risk, and regulatory considerations in banking
Abacus Insights is transforming how data works for health plans. Our mission is simple: make healthcare data usable, so the people responsible for care and cost decisions can act faster, with confidence. We help health plans break down data silos to create a single, trusted data foundation. That foundation powers better decisions, so plans can improve outcomes, reduce waste, and deliver better experiences for members and providers alike.
Backed by $100M from top investors, we’re tackling big challenges in an industry that’s ready for change. Our platform enables GenAI use cases by delivering clean, connected, and reliable healthcare data that can support automation, prioritization, and decision workflows—and it’s why we are leading the way.
Our innovation begins with people. We are bold, curious, and collaborative—because the best ideas come from working together. Ready to make an impact? Join us and let's build the future together.
About the Role: As a Senior AI Engineer, you will help architect, build, and implement our AI platforms, working closely with data quality engineers, data/software engineers, platform engineers, DevOps, product owners, and the business. You will provide technical leadership for multiple, diverse, and geographically distributed teams, and coach and guide those teams to enable future success. You will also partner with engineering managers, directors, and VPs on strategic initiatives and future business growth opportunities, especially around AI, ensuring the engineering organization can scale to meet the need.
As a member of the engineering team, you are responsible for coaching, mentoring, and driving change to enable us to better deliver and innovate in an agile environment.
Your day to day:
Help shape the direction of our applied AI areas and the intelligence features in our products. Drive the deployment of state-of-the-art AI models and systems that directly impact the capabilities and performance of Abacus’s products and services (e.g., Abacus Agents, MCP, SQL and RAG agents and tools, etc.)
Develop novel data collection, fine-tuning, and LLM technologies that achieve optimal performance on specific tasks and domains
Design and implement ML/AI pipelines for data preprocessing, feature engineering, model training, hyperparameter tuning, and model evaluation, enabling rapid experimentation, iteration and self-healing
Work closely with cross-functional teams, including AI engineers, data engineers, and product teams, to deliver impactful AI solutions that enhance user productivity and satisfaction
Build scalable, reusable backend systems to support GenAI products across the company. Develop robust logging, telemetry, and evaluation harnesses to ensure reliable model and GenAI system performance
Skillful at driving critical projects (e.g. AI and data engineering Innovation, Development Productivity and Benchmarking)
Meaningful experience in the world of Data and AI
Capable of credible customer interactions
Mentoring development team members to ensure delivered solutions adhere to the software architecture strategy, coding standards, and established organizational policies and procedures
Participating in software architectural discussions, influencing decisions, and collaborating with peers to maintain consistency across the organization
Identifying people and process improvement strategies for the Agile/Scrum team(s), especially for AI projects/teams where innovation and iteration are critical
Communicate organizational updates to ensure teams adhere to the established policies and procedures
Ensure projects are completed on time and according to our quality standards
Facilitates communication upward around architecture, design, and implementation objectives
Proven experience in leading software development teams or projects
Excellent knowledge of software development design, QA and test automation, and experience with agile development methodologies
Demonstrated knowledge of Cloud Architecture, AI Architecture, Agents/Agentic systems, Massive Parallel Processing (MPP) compute frameworks for Data+AI platforms, Security, and MCP/API based services
Good understanding of Incident Management, Configuration Management, Operational efficiency and Customer Escalation Management preferred. Can manage the balance of deliveries on our roadmap commitments while dealing with interruptions and client escalations
Good experience with open source
Coach and mentor software and AI developers, develop staff skills, provide continuous feedback
Has relentlessly high standards (is never satisfied with the status quo)
Responsible for protecting, securing, and properly handling all confidential data held by Abacus, guarding against unauthorized access, improper transmission, and/or unapproved disclosure of information that could result in harm to Abacus or our clients.
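The RAG agents and tools mentioned in this list rest on a retrieval step. A toy sketch in pure NumPy (the vectors and document names are illustrative; a real system would use a learned embedding model and a vector store):

```python
# Hypothetical sketch: the retrieval step of a RAG system reduced to
# cosine similarity over toy embedding vectors.
import numpy as np

docs = ["claims data dictionary", "member eligibility rules", "provider directory"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.1],
                     [0.0, 0.2, 0.9]])
query_vec = np.array([0.1, 0.95, 0.05])

def top_k(query: np.ndarray, matrix: np.ndarray, k: int = 1) -> list[int]:
    """Indices of the k rows of `matrix` most cosine-similar to `query`."""
    sims = matrix @ query / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query))
    return list(np.argsort(-sims)[:k])

print(docs[top_k(query_vec, doc_vecs)[0]])  # prints 'member eligibility rules'
```

The retrieved passages would then be placed into the LLM prompt; everything downstream of retrieval is where the orchestration and observability work described above lives.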
What you bring to the team:
Minimum of 3 years of ML/AI engineering experience in high-velocity, high-growth companies. Alternatively, a strong background in relevant ML/AI research in academia will be considered as a contributing qualification
Minimum of 3 years of experience as a software developer at some point in your career
Minimum of 2 years of experience with Databricks, Mosaic AI Gateway and associated technologies in Databricks stack related to Data and AI engineering
Strong track record of working with language-modeling technologies and GenAI. This could include: developing generative and embedding techniques, modern model architectures, fine-tuning/pre-training datasets, evaluation benchmarks, agents and agentic workflows (e.g. orchestration, workflow management, observability, debugging), RAG, SQL agents, MCP, etc.
Proficiency in Python, TensorFlow/PyTorch, and scalable ML/AI architecture
Ability to drive end-to-end model and system development, from research and prototyping to deployment and monitoring
Strong analytical and problem-solving skills, with a passion for improving AI-driven user experiences
Strong coding and software engineering skills, and familiarity with software engineering principles around testing, code reviews and deployment
Experience and good understanding of designing scalable, distributed systems for running small to medium scale data processing applications and services (10s – 100s of TBs of data)
Possesses a level of breadth and depth of software and AI development experience that allows for influence and competence in technical discussions with internal and external stakeholders
Solid understanding of roles adjacent to software development (product management, project management, client delivery, operations, etc.). Ability to adapt to these roles as defined at Abacus and to work with others in them
Proven record of success in driving and/or leading engineering teams and projects toward production-grade systems that support customers
Nice to have, but not required: experience with Snowflake, FHIR, or healthcare data
What you’ll get in return
Competitive Leave & Benefits
Comprehensive health coverage
Equity for every employee – share in our success
Growth-focused environment – your development matters here
Work arrangements
Standard hours: 8 hours/day, 5 days/week
Location: Pune, Hybrid (3 days a week in office)
Shift: 12pm-9pm IST
Our Commitment as an Equal Opportunity Employer
As a mission-led technology company helping to drive better healthcare outcomes, Abacus Insights believes that the best innovation and value we can bring to our customers comes from diverse ideas, thoughts, experiences, and perspectives. Therefore, we dedicate resources to building diverse teams and providing equal employment opportunities to all applicants. Abacus prohibits discrimination and harassment regarding race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
At the heart of who we are is a commitment to continuously and intentionally building an inclusive culture—one that empowers every team member across the globe to do their best work and bring their authentic selves. We carry that same commitment into our hiring process, aiming to create an interview experience where you feel comfortable and confident showcasing your strengths. If there’s anything we can do to support that—big or small—please let us know.
The world’s most sophisticated companies rely on AlphaSense to remove uncertainty from decision-making. With market intelligence and search built on proven AI, AlphaSense delivers insights that matter from content you can trust. Our universe of public and private content includes equity research, company filings, event transcripts, expert calls, news, trade journals, and clients’ own research content.
The acquisition of Tegus by AlphaSense in 2024 advances our shared mission to empower professionals to make smarter decisions through AI-driven market intelligence. Together, AlphaSense and Tegus will accelerate growth, innovation, and content expansion, with complementary product and content capabilities that enable users to unearth even more comprehensive insights from thousands of content sets. Our platform is trusted by over 6,000 enterprise customers, including a majority of the S&P 500. Founded in 2011, AlphaSense is headquartered in New York City with more than 2,000 employees across the globe and offices in the U.S., U.K., Finland, India, Singapore, Canada, and Ireland. Come join us!
In India, we have our teams based in Pune, Mumbai, Delhi, and Bangalore. We pride ourselves on being client-focused and collaborative, on the pioneering edge of technology. We also have fun. Whether it’s office happy hours, team outings, or game nights, we believe in working with people we like to hang out with.
Ideal candidate traits
If you identify as a team player and you are humble, hungry, and smart — we would love to get to know you. We believe these qualities to be vital to building a strong, productive, and collaborative work environment, where everyone is committed to achieving common goals and working towards shared success.
We are looking for people who are willing to learn from others, who can put their egos aside and prioritise the needs of the team. You should also be highly motivated, driven to succeed, and always looking for ways to improve. We value emotional intelligence, excellent communication skills, and a collaborative spirit that enables you to build strong relationships with teammates and work effectively towards shared goals.
At AlphaSense we are committed to fostering a work culture that encourages these qualities and values the contributions of team players who exhibit them. We seek to match the ambition of ideal team players by offering opportunities for personal and professional growth, providing the resources and support necessary to achieve their goals, and creating an environment where hard work, dedication, and teamwork are celebrated and rewarded. At AlphaSense we are building teams of ideal players who embody the core values of Sisu, driving the company’s success and helping us to create a positive impact on the world.
Technologies that we work with:
Frontend
ReactJS, TypeScript, Micro Frontends
Applications
NodeJS, TypeScript, GraphQL, Java 17, Spring Boot, Python, FastAPI
If you’re looking for a challenge, if you love finding things that need interesting solutions, AlphaSense is a great place to come. It is a place for builders, for people who want to create the next thing.
Want to hear more?
Contact us and let’s figure out together if we can find a common path. Tell us about yourself and your background in your own words.
AlphaSense is an equal-opportunity employer. We are committed to a work environment that supports, inspires, and respects all individuals. All employees share in the responsibility for fulfilling AlphaSense’s commitment to equal employment opportunity. AlphaSense does not discriminate against any employee or applicant on the basis of race, color, sex (including pregnancy), national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any other non-merit factor. This policy applies to every aspect of employment at AlphaSense, including recruitment, hiring, training, advancement, and termination.
In addition, it is the policy of AlphaSense to provide reasonable accommodation to qualified employees who have protected disabilities to the extent required by applicable laws, regulations, and ordinances where a particular employee works.
Recruiting Scams and Fraud
We at AlphaSense have been made aware of fraudulent job postings and individuals impersonating AlphaSense recruiters. These scams may involve fake job offers, requests for sensitive personal information, or demands for payment. Please note:
AlphaSense never asks candidates to pay for job applications, equipment, or training.
All official communications will come from an @alpha-sense.com email address.
If you’re unsure about a job posting or recruiter, verify it on our Careers page.
If you believe you’ve been targeted by a scam or have any doubts regarding the authenticity of any job listing purportedly from or on behalf of AlphaSense please contact us. Your security and trust matter to us.
Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.
MAKE AN IMPACT
We combine innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
Machine Learning and AI: Snowflake Cortex, TensorFlow, PyTorch
Key Responsibilities:
Develop scalable Python applications focused on GenAI
Knowledge of Snowflake Cortex is a big plus.
Learn and apply techniques like document chunking and embeddings
Work with both SQL (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB, Cassandra) databases, including schema design and optimization
Design, develop, and fine-tune applications built around foundation models (e.g., LLMs, multimodal models) for specific business needs
Implement and experiment with generative AI techniques, including but not limited to Retrieval-Augmented Generation (RAG) and prompt engineering
Lead the implementation of intelligent autonomous agents and multi-agent systems capable of complex decision-making, task execution, and interaction with external systems
Utilize and contribute to agentic AI frameworks (e.g., LangChain, AutoGen, Semantic Kernel, Crew AI) to build robust and scalable AI agents
Develop and integrate Model Context Protocol (MCP) solutions to standardize how AI applications access and utilize external data sources, tools, and real-time information
Lead the development of full-stack applications that integrate generative AI models and agentic systems, ensuring seamless user experiences
Work with front-end technologies (e.g., React, Angular, Vue.js, JavaScript, TypeScript, HTML, CSS) and back-end frameworks (e.g., Python with Flask/Django/FastAPI, Node.js, Go)
Design and implement scalable RESTful APIs and microservices to expose AI functionalities
Deploy, manage, and optimize AI/ML workloads and full-stack applications on at least one major cloud platform (AWS, Azure, or GCP)
Implement LLMOps/MLOps and DevOps best practices for continuous integration, continuous delivery (CI/CD), model deployment, monitoring, and lifecycle management
Follow coding best practices and contribute to testing and documentation
Stay abreast of the latest advancements in generative AI, agentic systems, foundation models, and related fields through continuous research and experimentation
Collaborate effectively with cross-functional teams, including product managers, data scientists, researchers, and other engineers
Communicate effectively with both technical and non-technical team members, ensuring alignment and understanding across all project phases.