All active Computer Vision roles based in Jerusalem.
The mission of Speechify is to make sure that reading is never a barrier to learning.
Over 50 million people use Speechify’s text-to-speech products to turn whatever they’re reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify’s text-to-speech reading products include its iOS app, Android App, Mac App, Chrome Extension, and Web App. Google recently named Speechify the Chrome Extension of the Year and Apple named Speechify its 2025 Design Award winner for Inclusivity.
Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. They include frontend and backend engineers, AI research scientists, and others who have come from Amazon, Microsoft, and Google, from leading PhD programs like Stanford's, from high-growth startups like Stripe, Vercel, and Bolt, and from founding companies of their own.
Overview
We're hiring for the Data side of our AI team at Speechify. This role is responsible for all aspects of data collection in support of our model training operations. Through tight integration of infrastructure, engineering, and research work, we build high-quality datasets at petabyte scale and low cost. We are looking for a skilled Software Engineer to join us.
What You’ll Do
An Ideal Candidate Should Have
What we offer
Think you’re a good fit for this job?
Tell us more about yourself and why you're interested in the role when you apply.
And don’t forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Ready to apply?
Apply to Speechify
Lightricks is an AI-first company creating next-generation content creation technology for businesses, enterprises, and studios with a mission to bridge the gap between imagination and creation. At our core is LTX-2, an open-source generative video model, built to deliver expressive, high-fidelity video at unmatched speed. It powers both our own products and a growing ecosystem of partners through API integration.
The company is also known globally for pioneering consumer creativity through products like Facetune, one of the world’s most recognized creative brands, which helped introduce AI-powered visual expression to hundreds of millions of users worldwide. We combine deep research, user-first design, and end-to-end execution from concept to final render to bring the future of expression to all.
Following the success of LTX-2, our widely adopted open-source text-to-audio+video model, we are expanding our efforts to develop cutting-edge audio+video generation models and are hiring Research Scientists to join our Model Evaluation team, part of the LTX Foundational Model group.
The Model Evaluation team is the central nervous system of the LTX Foundation Model group. We don't just measure performance; we define what "good" looks like across a vast array of use cases. While we power the next generation of creative tools, LTX is also a foundational engine for simulation pipelines, game engines, synthetic data generation, architectural rendering, and digital avatars. We act as the critical bridge between raw research and industrial-grade reliability, building the benchmarks that ensure our models are world-class for both artists and engineers.
As a Research Scientist in Model Evaluation, you are the ultimate authority on model quality and utility. You will design the automated judges, reward models, evaluation datasets, and benchmarking ecosystems that determine the future of LTX. Your mission is to provide the "ground truth" for our pre-training and post-training teams. You will blend the rigor of a researcher with the intuition of a product-thinker, developing metrics that capture both the aesthetic soul of a video and the functional precision required for high-stakes professional use.
Ready to apply?
Apply to Lightricks
The Core Generative AI team at Lightricks Research is a unified group of researchers and engineers dedicated to developing the generative foundational models that serve LTX Studio, our AI-based video creation platform. Our focus is on creating a controllable, cutting-edge video generative model by merging state-of-the-art algorithms with exceptional engineering. This involves enhancing machine learning components within our sophisticated internal training framework, which is crucial for developing advanced models. We specialize in both the research and the engineering that enable efficient, scalable training and inference, allowing us to deliver state-of-the-art AI-generated video models.
As a Large Scale Video Understanding Research Scientist, you will play a key role in improving video generation quality and efficiency by improving the video and audio understanding pipelines used for both training data construction and model evaluation. This role demands hands-on work with large-scale Video Language Models (VLLMs), including fine-tuning, post-training, and control, alongside implementing classic computer vision and signal processing algorithms and applying strong research skills. Your expertise in post-training and controlling large-scale foundational models, understanding statistics, implementing complex systems, and eliminating bugs will be crucial, as our video training sets consist of petabytes of data processed across hundreds to thousands of virtual machines.
This role is designed for individuals who are not only technically proficient but also deeply passionate about pushing the boundaries of AI and machine learning through innovative engineering and collaborative research.
We’re here to push the boundaries of what’s possible with AI and video - not for the buzz, but for the craft, the challenge, and the chance to make something genuinely new.
We believe in an environment where people are encouraged to think, create and explore. Real impact happens when people are empowered to experiment, evolve, and elevate together.
At Lightricks, every breakthrough starts with great people and a collaborative mindset. If you're looking for a place that combines deep tech, creative energy, and zero buzzword culture, you might be in the right place.
Ready to apply?
Apply to Lightricks