AI Compiler Engineer
EnCharge AI is a leader in advanced AI hardware and software systems for edge-to-cloud computing. EnCharge’s robust and scalable next-generation in-memory computing technology provides orders-of-magnitude higher compute efficiency and density compared to today’s best-in-class solutions. The high-performance architecture is coupled with seamless software integration and will enable the immense potential of AI to be accessible in power-, energy-, and space-constrained applications. EnCharge AI launched in 2022 and is led by veteran technologists with backgrounds in semiconductor design and AI systems.
About the Role
EnCharge AI is seeking a highly skilled and experienced AI Compiler Engineer to spearhead efforts in developing and optimizing graph compilers tailored to cutting-edge AI and ML workloads. You will collaborate with hardware architects and AI researchers to enhance performance, optimize computation graphs, and enable efficient model deployment on EnCharge’s inference accelerators.
Responsibilities
Qualifications
EnCharge AI is an equal employment opportunity employer in the United States.
Ready to apply?
Apply to EnCharge AI
Silicon Operations Manager
About the Position:
The Silicon Operations Manager is responsible for driving operational activities from product definition and production ramp through sustained production, including new product introduction, operational readiness for production, and product lifecycle management. The Silicon Operations Manager will be responsible for the development, qualification, and manufacturing ramp-up of the company’s silicon products, and will report to the COO.
Responsibilities:
Qualifications:
The salary range for this position is $125,000 to $200,000 per year. Actual compensation offered will be determined based on factors such as job-related knowledge, skills, and experience.
Ready to apply?
Apply to EnCharge AI
Principal DFT Engineer
Job Description:
Developing silicon for AI Computing isn't just about speed; it’s about balancing high-performance data processing with extreme power efficiency and reliability in remote environments.
As a Principal DFT (Design-for-Test) Engineer, you will lead our test strategy, ensuring the manufacturing quality, reliability, and test efficiency of complex, high-performance AI accelerators.
Key Responsibilities:
Architectural Leadership: Define and implement the end-to-end DFT architecture for complex SoCs, including hierarchical DFT, scan compression, boundary scan, and MBIST.
Edge-Specific Reliability: Develop strategies for In-System Test (IST) and power-on self-test (POST) to ensure chip health in remote edge data centers.
Implementation & Flow: Oversee scan insertion, ATPG (Stuck-at, Transition, Path Delay), and Memory/Logic BIST.
Cross-Functional Synergy: Collaborate with Design, Physical Design, and Yield teams on timing analysis and to achieve high test coverage while minimizing area overhead and power impact.
Post-Silicon Validation: Lead the bring-up and debug phase on ATE (Automated Test Equipment) to root-cause silicon failures and optimize test time.
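To make the MBIST responsibility above concrete, here is a minimal sketch of the classic March C- memory-test algorithm that MBIST engines commonly implement in hardware. The function names, the software memory model, and the stuck-at-0 fault injection are illustrative assumptions for this sketch, not details of EnCharge's test infrastructure:

```python
def march_c_minus(mem_read, mem_write, size):
    """Run the March C- algorithm over a memory of `size` one-bit cells.

    mem_read(addr) -> bit and mem_write(addr, bit) abstract the memory.
    Returns a list of (element, addr, expected, got) mismatches.
    """
    faults = []

    def element(name, addrs, ops):
        for a in addrs:
            for op, bit in ops:
                if op == "r":
                    got = mem_read(a)
                    if got != bit:
                        faults.append((name, a, bit, got))
                else:
                    mem_write(a, bit)

    up, down = range(size), range(size - 1, -1, -1)
    element("M0: up w0",      up,   [("w", 0)])             # initialize all cells
    element("M1: up r0,w1",   up,   [("r", 0), ("w", 1)])
    element("M2: up r1,w0",   up,   [("r", 1), ("w", 0)])
    element("M3: down r0,w1", down, [("r", 0), ("w", 1)])
    element("M4: down r1,w0", down, [("r", 1), ("w", 0)])
    element("M5: down r0",    down, [("r", 0)])
    return faults

# A fault-free memory (modeled as a dict) passes cleanly.
good = {}
assert march_c_minus(good.get, good.__setitem__, 16) == []

# A cell stuck at 0 (here, address 5) is caught by the read-1 elements.
stuck = {}
def sa0_write(a, b):
    stuck[a] = 0 if a == 5 else b   # injected stuck-at-0 fault
faults = march_c_minus(stuck.get, sa0_write, 16)
assert faults and all(f[1] == 5 for f in faults)
```

The ascending/descending address orders are what let March tests detect address-decoder and coupling faults as well as simple stuck-at faults, which is why variants of this pattern underpin production MBIST controllers.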
Technical Requirements:
Experience: 10+ years in DFT, with at least 2 years in a leadership or principal role.
Tools: Mastery of industry-standard tools (e.g., Synopsys TestMAX, Siemens/Mentor Tessent, or Cadence Genus/Modus).
Memory & Logic Test: Deep expertise in MBIST (Memory Built-In Self-Test) with repair capabilities, SCAN, IJTAG (IEEE 1687) and boundary scan (IEEE 1149.1/6).
Advanced Nodes: Proven track record with FinFET nodes (7nm, 5nm, or below).
Low Power: Experience managing DFT in multi-voltage/power-gated designs—crucial for edge efficiency.
The salary range for this position is $180,000 to $220,000 per year. Actual compensation offered will be determined based on factors such as job-related knowledge, skills, and experience.
Ready to apply?
Apply to EnCharge AI
AI Research Engineer
About the Role
EnCharge AI is looking for an experienced AI Research Engineer to optimize deep learning models for deployment on edge AI platforms. You will work on model compression, quantization strategies, and efficient inference techniques to improve the performance of AI workloads.
Responsibilities
Research and develop quantization-aware training (QAT) and post-training quantization (PTQ) techniques for deep learning models.
Implement reduced-precision optimizations (e.g., INT8, BF16).
Design and optimize efficient inference algorithms for AI workloads, focusing on latency, memory footprint, and power efficiency.
Work with frameworks such as PyTorch, ONNX Runtime, and TVM to deploy optimized models.
Analyze accuracy trade-offs and develop calibration techniques to mitigate precision loss in quantized models.
Collaborate with hardware engineers to optimize model execution for edge devices and NPUs.
Contribute to research on knowledge distillation, sparsity, pruning, and model compression techniques.
Benchmark performance across different hardware and software stacks.
Stay updated with the latest advancements in AI efficiency, model compression, and hardware acceleration.
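As a minimal illustration of the calibration and quantization work described above, here is a sketch of per-tensor symmetric post-training quantization to INT8 in plain NumPy. The max-abs calibration rule and all function names are illustrative assumptions for this sketch, not EnCharge's actual toolchain:

```python
import numpy as np

def calibrate_scale(samples, num_bits=8):
    """Per-tensor symmetric scale from calibration samples (max-abs calibration)."""
    qmax = 2 ** (num_bits - 1) - 1                  # 127 for INT8
    max_abs = max(np.abs(s).max() for s in samples)
    return max_abs / qmax

def quantize(x, scale):
    """Quantize to the INT8 grid; also return dequantized values for error analysis."""
    q = np.clip(np.round(x / scale), -128, 127)
    return q.astype(np.int8), q * scale             # integer codes, reconstruction

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64))             # a toy weight tensor
scale = calibrate_scale([w])
codes, w_hat = quantize(w, scale)

# Rounding error per element is bounded by half a quantization step.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-12
```

QAT extends this idea by inserting the round-and-clip ("fake quantize") step into the training graph so the model learns weights that tolerate the INT8 grid, while PTQ, as sketched here, only needs calibration data after training.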
Qualifications
Master’s or Ph.D. in Computer Science, Electrical Engineering, or a related field.
Strong expertise in deep learning, model optimization, and numerical precision analysis.
Hands-on experience with model quantization techniques (QAT, PTQ, mixed precision).
Proficiency in Python, C++, CUDA, or OpenCL for performance optimization.
Experience with AI frameworks: PyTorch, TensorFlow, ONNX Runtime, TVM, TensorRT, or OpenVINO.
Understanding of low-level hardware acceleration (e.g., SIMD, AVX, Tensor Cores, VNNI).
Familiarity with compiler optimizations for ML workloads (e.g., XLA, MLIR, LLVM).
EnCharge AI is an equal employment opportunity employer in the United States.
Ready to apply?
Apply to EnCharge AI
Hardware Technical Writer (Part-time)
About the Position:
The Hardware Technical Writer will report to the Chief Operating Officer. This is a remote, 1099 contract position.
Responsibilities:
Qualifications:
This is a 1099 contractor role with an estimated scope of 10 hours per week. The pay range for this position is $50 to $55 USD per hour if located in the US. Actual compensation offered will be determined based on factors such as job-related knowledge, skills, and experience.
Ready to apply?
Apply to EnCharge AI
AI Runtime Engineer
About the Role
EnCharge AI is seeking an AI Runtime Engineer to develop and optimize the execution stack for our next-generation AI accelerator. In this role, you will work on low-latency, high-performance runtime software that enables efficient execution of deep learning models on specialized hardware. You will collaborate with hardware, compiler, and AI framework teams to deliver optimized AI inference and training performance across cloud and edge environments.
Responsibilities
Qualifications
EnCharge AI is an equal employment opportunity employer in the United States.
Ready to apply?
Apply to EnCharge AI
LLM Inference Deployment Engineer
About the Role
EnCharge AI is seeking an LLM Inference Deployment Engineer to optimize, deploy, and scale large language models (LLMs) for high-performance inference on its energy-efficient AI accelerators. You will work at the intersection of AI frameworks, model optimization, and runtime execution to ensure efficient, low-latency model execution.
Responsibilities
Qualifications
EnCharge AI is an equal employment opportunity employer in the United States.
Ready to apply?
Apply to EnCharge AI
Device Driver Engineer
About the Role
EnCharge AI is seeking a highly skilled Device Driver Engineer to design and implement a high-performance driver stack for our cutting-edge AI accelerator hardware. In this role, you will work closely with hardware, firmware, and AI software teams to develop low-latency, high-bandwidth communication between the host system and the AI accelerator.
Responsibilities
Qualifications
EnCharge AI is an equal employment opportunity employer in the United States.
Ready to apply?
Apply to EnCharge AI
Embedded Software Engineer
About the Role
EnCharge AI is looking for an Embedded Software Engineer to develop firmware for our edge AI processors. The candidate must possess an excellent understanding of computer architecture and operating system concepts, including, but not limited to, memory management, virtualization, and PCIe address space. The role includes designing and developing core firmware for various parts of the SoC. The candidate must possess strong communication skills to interface with the Runtime, Architecture, and Hardware teams.
Responsibilities
Qualifications
EnCharge AI is an equal employment opportunity employer in the United States.
Ready to apply?
Apply to EnCharge AI