All active Linux roles based in Toronto.
We are hiring an experienced Security Software Engineer (Staff or Senior) for our Infrastructure Security team to design and build scalable security controls and services within MongoDB Atlas’s multi-cloud infrastructure.
The team sits within the Site Reliability Engineering organization and works with other engineering teams to ensure that our infrastructure adheres to the highest security standards.
This role can be based out of our New York City, Austin, Seattle, or San Francisco offices, or be fully remote working standard East Coast business hours.
You might be a great fit if you match some of the following:
MongoDB is built for change, empowering our customers and our people to innovate at the speed of the market. We have redefined the database for the AI era, enabling innovators to create, transform, and disrupt industries with software. MongoDB’s unified database platform—the most widely available, globally distributed database on the market—helps organizations modernize legacy workloads, embrace innovation, and unleash AI. Our cloud-native platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available across AWS, Google Cloud, and Microsoft Azure.
With offices worldwide and nearly 60,000 customers—including 75% of the Fortune 100 and AI-native startups—relying on MongoDB for their most important applications, we’re powering the next era of software.
Our compass at MongoDB is our Leadership Commitment, guiding how and why we make decisions, show up for each other, and win. It’s what makes us MongoDB.
To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world!
MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.
MongoDB, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type and makes all hiring decisions without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Req ID: 2263171228
MongoDB’s base salary range for this role is posted below. Compensation at the time of offer is unique to each candidate and based on a variety of factors such as skill set, experience, qualifications, and work location. Salary is one part of MongoDB’s total compensation and benefits package. Other benefits for eligible employees may include: equity, participation in the employee stock purchase program, flexible paid time off, 20 weeks fully-paid gender-neutral parental leave, fertility and adoption assistance, 401(k) plan, mental health counseling, access to transgender-inclusive health insurance coverage, and health benefits offerings. Please note, the base salary range listed below and the benefits in this paragraph are only applicable to U.S.-based candidates.
Apply to MongoDB
We are hiring an experienced Security Software Engineer (Staff or Senior) for our Infrastructure Security team to design and build scalable security controls and services within MongoDB Atlas’s multi-cloud infrastructure.
The team sits within the Site Reliability Engineering organization and works with other engineering teams to ensure that our infrastructure adheres to the highest security standards.
This role can be based out of our Dublin office, or work fully remotely in Ireland.
You might be a great fit if you match some of the following:
MongoDB is built for change, empowering our customers and our people to innovate at the speed of the market. We have redefined the database for the AI era, enabling innovators to create, transform, and disrupt industries with software. MongoDB’s unified database platform, the most widely available, globally distributed database on the market, helps organizations modernize legacy workloads, embrace innovation, and unleash AI. Our cloud-native platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available across AWS, Google Cloud, and Microsoft Azure.
With offices worldwide and over 60,000 customers (including 75% of the Fortune 100 and AI-native startups) relying on MongoDB for their most important applications, we’re powering the next era of software.
Our compass at MongoDB is our Leadership Commitment, guiding how and why we make decisions, show up for each other, and win. It’s what makes us MongoDB.
To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world!
MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.
MongoDB is an equal opportunities employer.
Req ID: 426183
Apply to MongoDB
Platform Engineering is the department within SRE that is responsible for a range of critical infrastructure and operational functions that support the broader engineering organization. Among these are our multi-cloud-provider Kubernetes infrastructure, networking, load balancing (including our public-facing edge and internal service mesh), and observability and alerting systems.
The Fleet Management team provides the core runtime environment that empowers our developers to build and ship products to delight our customers. We manage the end-to-end lifecycle of our Kubernetes fleet, alongside the critical components that ensure cluster reliability and security (e.g., CoreDNS, cert-manager, and Gatekeeper). As our infrastructure scales to support new use cases and products, we are spearheading a migration from Terraform-based Infrastructure as Code (IaC) to an Operator-driven lifecycle management model.
This role can be based out of our Austin, Boston, Los Angeles, New York City, Raleigh, or San Francisco offices, remotely within the United States, or out of our European office in Dublin.
MongoDB is built for change, empowering our customers and our people to innovate at the speed of the market. We have redefined the database for the AI era, enabling innovators to create, transform, and disrupt industries with software. MongoDB’s unified database platform, the most widely available, globally distributed database on the market, helps organizations modernize legacy workloads, embrace innovation, and unleash AI. Our cloud-native platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available across AWS, Google Cloud, and Microsoft Azure.
With offices worldwide and over 60,000 customers (including 75% of the Fortune 100 and AI-native startups) relying on MongoDB for their most important applications, we’re powering the next era of software.
Our compass at MongoDB is our Leadership Commitment, guiding how and why we make decisions, show up for each other, and win. It’s what makes us MongoDB.
To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world!
MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.
MongoDB, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type and makes all hiring decisions without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Req ID: 426182
MongoDB’s base salary range for this role is posted below. Compensation at the time of offer is unique to each candidate and based on a variety of factors such as skill set, experience, qualifications, and work location. Salary is one part of MongoDB’s total compensation and benefits package. Other benefits for eligible employees may include: equity, participation in the employee stock purchase program, flexible paid time off, 20 weeks fully-paid gender-neutral parental leave, fertility and adoption assistance, 401(k) plan, mental health counseling, access to transgender-inclusive health insurance coverage, and health benefits offerings. Please note, the base salary range listed below and the benefits in this paragraph are only applicable to U.S.-based candidates.
Apply to MongoDB
We are looking for an experienced Senior or Staff Engineer for our Infrastructure Security (InfraSec) team within SRE to guide the security of our cloud-based infrastructure. As a Staff SRE, you will be very hands-on technically while also mentoring a small team of SREs.
The InfraSec team collaborates closely with other engineering teams to ensure that our infrastructure adheres to the highest security standards. It builds essential security infrastructure and implements controls that reinforce the platform’s security posture.
This is an SRE team, which means you can expect a highly hands-on approach, tackling the technical challenges of implementing large-scale solutions. The team is deeply involved in the technical aspects of security and the nuances of its actual implementation.
This role can sit in our New York City, Austin, Seattle, or San Francisco offices on a hybrid basis, or it can be fully remote from a location in the Eastern or Central time zone.
Cloud Security Design and Implementation:
Automation and Monitoring:
Security Tooling:
Experience:
Security Mindset:
Cloud Expertise:
Coding/Automation:
Linux and Networking:
Communication and Leadership Skills:
MongoDB is built for change, empowering our customers and our people to innovate at the speed of the market. We have redefined the database for the AI era, enabling innovators to create, transform, and disrupt industries with software. MongoDB’s unified database platform—the most widely available, globally distributed database on the market—helps organizations modernize legacy workloads, embrace innovation, and unleash AI. Our cloud-native platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available across AWS, Google Cloud, and Microsoft Azure.
With offices worldwide and nearly 60,000 customers—including 75% of the Fortune 100 and AI-native startups—relying on MongoDB for their most important applications, we’re powering the next era of software.
Our compass at MongoDB is our Leadership Commitment, guiding how and why we make decisions, show up for each other, and win. It’s what makes us MongoDB.
To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world!
MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.
MongoDB, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type and makes all hiring decisions without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Req ID: 1263064630
MongoDB’s base salary range for this role is posted below. Compensation at the time of offer is unique to each candidate and based on a variety of factors such as skill set, experience, qualifications, and work location. Salary is one part of MongoDB’s total compensation and benefits package. Other benefits for eligible employees may include: equity, participation in the employee stock purchase program, flexible paid time off, 20 weeks fully-paid gender-neutral parental leave, fertility and adoption assistance, 401(k) plan, mental health counseling, access to transgender-inclusive health insurance coverage, and health benefits offerings. Please note, the base salary range listed below and the benefits in this paragraph are only applicable to U.S.-based candidates.
Apply to MongoDB
MongoDB’s Storage Layer Services (SLS) team is re-architecting the MongoDB cloud storage layer and sits at the heart of our next-generation cloud storage architecture. This relatively new team is building performant, multi-tenant distributed storage services that both enhance today’s Atlas storage stack and enable more customer workloads to run more efficiently.
You will partner with the teams building these storage services to define SLOs, shape capacity plans, and ensure the reliability, durability, and operational safety of the storage layer that underpins Atlas. You’ll join a small, senior team of SREs as founding members of this organization, playing a crucial role in executing on a multi-year roadmap for MongoDB’s cloud storage architecture.
This role can be based out of our Toronto or Montreal office, or be remote within Canada while physically based in the Eastern or Central time zone.
MongoDB is built for change, empowering our customers and our people to innovate at the speed of the market. We have redefined the database for the AI era, enabling innovators to create, transform, and disrupt industries with software. MongoDB’s unified database platform, the most widely available, globally distributed database on the market, helps organizations modernize legacy workloads, embrace innovation, and unleash AI. Our cloud-native platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available across AWS, Google Cloud, and Microsoft Azure.
With offices worldwide and over 60,000 customers (including 75% of the Fortune 100 and AI-native startups) relying on MongoDB for their most important applications, we’re powering the next era of software.
Our compass at MongoDB is our Leadership Commitment, guiding how and why we make decisions, show up for each other, and win. It’s what makes us MongoDB.
To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. From employee affinity groups to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world!
MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter.
MongoDB, Inc. provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type and makes all hiring decisions without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
Req ID: 1273396252
AI is used to review applications based on job-related criteria and does not replace human decision-making. The hiring team decides who moves forward.
MongoDB’s base salary range for this role is posted below. Compensation at the time of offer is unique to each candidate and based on a variety of factors such as skill set, experience, qualifications, and work location. Salary is one part of MongoDB’s total compensation and benefits package. Other benefits for eligible employees may include: equity, participation in the employee stock purchase program, flexible paid time off, 20 weeks fully-paid gender-neutral parental leave, fertility and adoption assistance, Registered Retirement Savings Plan (RRSP) with employer match, mental health counseling, backup child and elder care, and health, dental, and vision benefits offerings. Please note, the base salary range listed below and the benefits in this paragraph are only applicable to candidates based in Canada.
Apply to MongoDB
Secure Every Identity, from AI to Human
Identity is the key to unlocking the potential of AI. Okta secures AI by building the trusted, neutral infrastructure that enables organizations to safely embrace this new era. This work requires a relentless drive to solve complex challenges with real-world stakes. We are looking for builders and owners who operate with speed and urgency and execute with excellence.
This is an opportunity to do career-defining work. We're all in on this mission. If you are too, let's talk.
Staff Security Infrastructure Engineer, Red Team
Within the Product Security team, our Red Team delivers robust security assurance for Okta's products, services, and infrastructure. You will be the team's dedicated infrastructure and tooling engineer, the first person in this role for a small team of operators. You will work alongside operators but not report through an operator chain; you'll collaborate as a peer focused on a different discipline.
We seek a Staff Security Infrastructure Engineer to own the engineering backbone that enables our operations. This is not a traditional operator role but a dedicated infrastructure, tooling, and automation engineering position embedded within the Red Team. You will design, build, maintain, and continuously improve the platforms, infrastructure, and custom tooling that our operators depend on to execute engagements.
Your work directly enables the team to operate at a higher maturity level: faster infrastructure deployment, more resilient and OPSEC-aware architecture, automated workflows, and reliable custom tooling, freeing operators to focus on the mission. Your role will also extend to cultivating stakeholder collaboration and elevating our company’s security posture through strategic engagement and proactive measures. As the team matures, this role can evolve toward platform leadership, custom capability development, or a hybrid operator/engineer path.
Required
Strongly Preferred
Nice to Have
Note: This is not an operator role. You will not be the person running hands-on-keyboard engagements as your primary function. While you may participate in operations to understand requirements or provide support, your core mission is ensuring the team's infrastructure, workflows, tooling, and automation are reliable, repeatable, and mature. You are the engineering foundation the operators build on.
(P22302_3403905)
Below is the annual salary range for candidates located in Canada. Your actual salary will depend on factors such as your skills, qualifications, and experience. In addition, Okta offers equity (where applicable), bonus, and benefits, including health, dental, and vision insurance, RRSP with a match, healthcare spending, telemedicine, and paid leave (including PTO and parental leave) in accordance with our applicable plans and policies. To learn more about our Total Rewards program, please visit: https://rewards.okta.com/can.
The Okta Experience
We are intentional about connection. Our global community, spanning over 20 offices worldwide, is united by a drive to innovate. Your journey begins with an immersive, in-person onboarding experience designed to accelerate your impact and connect you to our mission and team from day one.
Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and convictions records, consistent with applicable laws.
If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding please use this Form to request an accommodation.
Notice for New York City Applicants & Employees: Okta may use Automated Employment Decision Tools (AEDT), as defined by New York City Local Law 144, that use artificial intelligence, machine learning, or other automated processes to assist in our recruitment and hiring process. In accordance with NYC Local Law 144, if you are an applicant or employee residing in New York City, please click here to view our full NYC AEDT Notice.
Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Personnel and Job Candidate Privacy Notice at https://www.okta.com/legal/personnel-policy/.
Apply to Okta
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Our Tensix team is building custom AI compute cores, RISC-V CPUs, and chiplet-based architectures for datacenter, edge, and automotive AI. Design Verification Engineers on this team validate compute IP and subsystems and build scalable DV infrastructure to keep verification fast, automated, and production-grade.
This role is hybrid, based out of Toronto, ON or Austin, TX.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Apply to Tenstorrent
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Tenstorrent is building large-scale AI systems across internal clusters and customer deployments. This role sits at the intersection of site reliability, infrastructure operations, and customer engineering, ensuring our systems are reliable, observable, and production-ready.
This role is hybrid, based out of Toronto, ON; Austin, TX; or Santa Clara, CA.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Apply to Tenstorrent
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Join our innovative team as a Director, Systems & Solutions, where you'll play a pivotal role in deploying and optimizing cutting-edge computer systems and AI hardware. If you have a passion for hardware technologies, including accelerators, CPUs, and GPUs, and thrive in a dynamic, hands-on technical environment, we would love to hear from you. In this role you will lead a team and help us scale our customer base.
This role is hybrid, based out of Santa Clara, CA; Austin, TX or Toronto, ON.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
This offer of employment is contingent upon the applicant being eligible to access U.S. export-controlled technology. Due to U.S. export laws, including those codified in the U.S. Export Administration Regulations (EAR), the Company is required to ensure compliance with these laws when transferring technology to nationals of certain countries (such as EAR Country Groups D:1, E1, and E2). These requirements apply to persons located in the U.S. and all countries outside the U.S. As the position offered will have direct and/or indirect access to information, systems, or technologies subject to these laws, the offer may be contingent upon your citizenship/permanent residency status or ability to obtain prior license approval from the U.S. Commerce Department or applicable federal agency. If employment is not possible due to U.S. export laws, any offer of employment will be rescinded.
Apply to Tenstorrent
Tenstorrent is leading the industry on cutting-edge AI technology, revolutionizing performance expectations, ease of use, and cost efficiency. With AI redefining the computing paradigm, solutions must evolve to unify innovations in software models, compilers, platforms, networking, and semiconductors. Our diverse team of technologists has developed a high-performance RISC-V CPU from scratch and shares a passion for AI and a deep desire to build the best AI platform possible. We value collaboration, curiosity, and a commitment to solving hard problems. We are growing our team and looking for contributors of all seniorities.
Tenstorrent’s AI Software Infrastructure team builds the platforms that power internal development, orchestrate workloads, and manage large-scale AI hardware across on-prem data centers. This team develops and productionizes infrastructure used both internally and externally on Tenstorrent systems.
This role is hybrid, based out of Toronto, ON; Santa Clara, CA; Austin, TX; Belgrade, Serbia; Warsaw, Poland; or Gdańsk, Poland.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
Ready to apply?
Apply to Tenstorrent
Tenstorrent is building large-scale AI infrastructure across on-prem data centers and accelerator clusters. This role focuses on automation of provisioning, configuration, and deployment workflows to ensure systems are scalable, reliable, and repeatable.
This role is hybrid in Toronto, ON; Austin, TX; or Santa Clara, CA.
We welcome candidates at various experience levels for this role. During the interview process, candidates will be assessed for the appropriate level, and offers will align with that level, which may differ from the one in this posting.
Who You Are
What We Need
What You Will Learn
Compensation for all engineers at Tenstorrent ranges from $100k - $500k including base and variable compensation targets. Experience, skills, education, background and location all impact the actual offer made.
Tenstorrent offers a highly competitive compensation package and benefits, and we are an equal opportunity employer.
Ready to apply?
Apply to Tenstorrent
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
We are hiring a talented software engineer to help us build the next generation of photonic AI processors and interconnects. In this role, you will be responsible for developing and extending the device software and firmware stack for Photonic Compute and Photonic Interconnect products. You will collaborate with other software teams and hardware systems teams to develop security, telemetry, virtualization, and remote administration functionality.
We offer competitive compensation. The base salary range for this role is determined by location, experience, educational background, and market data.
Benefits eligibility may vary depending on your employment status and location. Lightmatter recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, sex, disability, age, veteran status, and other protected status as required by applicable law.
Export Control
Candidates must be able to comply with the federally mandated requirements of U.S. export control laws.
Ready to apply?
Apply to Lightmatter
Rumble is the Freedom-First technology platform. We proudly offer a video platform, cloud services, advertising solutions, and a non-custodial cryptocurrency wallet.
The Cloud Support Administrator (Odoo) serves as the primary administrator of our Odoo-based self-service portal and billing system while providing hands-on cloud support to customers. This role combines Odoo platform administration, direct customer engagement, and cloud resource troubleshooting to ensure customer satisfaction and minimize escalations to the backend engineering team. The ideal candidate has experience administering Odoo systems and a strong understanding of cloud environments to serve as the first point of resolution for incoming customer tickets.
Annual Compensation Range:
$65,000 - $85,000 CAD base + benefits + equity (If based in Canada)
$78,000 - $95,000 USD base + benefits + equity (If based in the United States)
Note: The salary range listed for this position is a good faith estimate based on experience, qualifications, and internal compensation structure. The actual salary offered varies depending on the candidate's skill level and experience. This posting refers to an active vacancy within the organization.
Why Our Team Loves Working Here:
EEO Statement:
Rumble is an equal opportunity employer. We promote an equal playing field where everyone has the same opportunities regardless of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability status, or any other applicable characteristics protected by law. Rumble is an active participant in the e-verify program.
Physical demands of the position:
While performing the duties of this job, the employee is regularly required to sit for prolonged periods of time while using a computer and/or keyboard. The employee is required to communicate verbally and hear. The employee may be required to walk, reach with hands and arms, balance, and stoop or kneel. The employee may occasionally be required to lift and/or move up to 15 pounds. Specific vision abilities required by this job include clarity of vision at approximately 20 inches or less (i.e., working with small objects or reading small print), including the use of computers.
Ready to apply?
Apply to Rumble
About Keyfactor
Our mission is to build a connected society, rooted in trust, with identity-first security for every machine and human. Keyfactor helps organizations move fast to establish digital trust at scale — and then maintain it. With decades of cybersecurity experience, Keyfactor is trusted by more than 1,500 companies across the globe. We are proud to continually earn recognition as a Best Place to Work, and we achieve that through our amazing people who cultivate our culture as we grow. We hope you will trust your future with Keyfactor!
Title: Senior Software Engineer
Location: Stockholm, Sweden or Toronto, Canada (hybrid work model)
Experience: Senior
Job Function: Engineering
Employment Type: Full-Time
Industry: Computer Network & Security
About the position
Keyfactor is looking for a highly capable and motivated Senior (Java) Software Engineer to join one of our agile teams. Your primary focus, together with the team, is to support the continued growth of our flagship PKI product offering. You have demonstrable experience developing professional Java-based systems and have probably held a core engineering role in a product business. While experience with or interest in PKI is a merit, it is more important for us to find someone passionate and capable, with the right mindset to help us scale and become increasingly self-organizing.
Job Responsibilities
Minimum Qualification, Education, and Skills
Capabilities
Leadership & Ownership
Who You Are
Experience
#LI-NA1
Compensation
Salary will be commensurate with experience.
Culture, Career Opportunities and Benefits
We build teams that continually strive to get better than the day before. You will be challenged daily and given opportunities to grow personally and professionally. We balance autonomy and structure to create an entrepreneurial environment to spur creativity and new ideas.
Here are just some of the initiatives that make our culture special:
Our Core Values
Our core values are extremely important to how we run our business and what we look for in every team member:
Trust is paramount.
We deliver security software and solutions where trust and openness are of the highest importance for our customers. We are honest and a trusted partner in every aspect of business.
Customers are core.
We strategize, operate, and execute through a customer-centric view. We prioritize the security interests of our customers, and we act as if their data were our own.
Innovation never stops, it only accelerates.
The speed of change is accelerating. We are committed, through investment and focus, to stay ahead of the innovation curve.
We deliver with agility.
We thrive in high-paced and continually changing environments. We navigate through newly added variables, adjust accordingly, while driving towards our strategic goals.
United by respect.
Respect for all is what unites us. We promote diversity, inclusivity, equity, and acting with empathy and openness, both in our business and in our communities.
Teams make “it” happen.
Vision and goals are not individually achievable – they require teamwork. We pride ourselves in operating as a cohesive team, creating promoters and partners, and winning as one.
Keyfactor is a proud equal opportunity employer including but not limited to veterans and individuals with disabilities.
REASONABLE ACCOMMODATION: Applicants with disabilities may contact a member of Keyfactor’s People team via people@keyfactor.com and/or telephone at 1.216.785.2990 to request and arrange for accommodations at any time.
Ready to apply?
Apply to Keyfactor, Inc.
At Lyft, our purpose is to serve and connect. We aim to achieve this by cultivating a work environment where all team members belong and have the opportunity to thrive.
Lyft is looking for experienced software engineers from a range of disciplines. We are growing our team with people who want to build, improve, and incorporate technologies that enrich the lives of our community. As an engineer at Lyft, you'll collaborate with teams like product, data science, analytics, and operations on code that empowers us to iterate quickly while delighting our riders and drivers.
As a Software Engineer for Lyft Ads, you will work on one of Lyft's newest lines of business, focused on building the world's largest transportation media network. We build products that allow brands to engage with our unique audience throughout their transportation journeys and beyond. For this role we are seeking software engineers who are passionate about backend and data engineering. You will join our Ad Infra Engineering team and contribute to building the systems and pipelines powering our ad-serving, measurement, and audience platform. This role is a great opportunity for an early-career engineer to gain experience with distributed backend systems and large-scale data workflows, collaborating closely with product, analytics, and data science teams.
Why Lyft Ads?
Lyft is committed to creating an inclusive workforce that fosters belonging. Lyft believes that every person has a right to equal employment opportunities without discrimination because of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, marital status, family status, disability, pardoned record of offences, or any other basis protected by applicable law or by Company policy. Lyft also strives for a healthy and safe workplace and strictly prohibits harassment of any kind. Accommodation for persons with disabilities will be provided upon request in accordance with applicable law during the application and hiring process. Please contact your recruiter if you wish to make such a request.
Lyft highly values having employees working in-office to foster a collaborative work environment and company culture. This role will be in-office on a hybrid schedule — Team Members will be expected to work in the office at least 3 days per week, including on Mondays, Wednesdays, and Thursdays. Lyft considers working in the office at least 3 days per week to be an essential function of this hybrid role. Your recruiter can share more information about the various in-office perks Lyft offers. Additionally, hybrid roles have the flexibility to work from anywhere for up to 4 weeks per year. #Hybrid
The expected base pay range for this position in the Toronto area is CAD $108,000 - CAD $135,000, not inclusive of potential equity offering, bonus or benefits. Salary ranges are dependent on a variety of factors, including qualifications, experience and geographic location. Your recruiter can share more information about the salary range specific to your working location and other factors during the hiring process.
Lyft may use artificial intelligence to screen applicants, however, Lyft employees make the ultimate selection and hiring decisions.
This job fills an existing vacancy.
Ready to apply?
Apply to Lyft
Veeam is the Data and AI Trust Company, specializing in helping organizations ensure their data and AI are fully understood, secured, and resilient to enable the acceleration of safe AI at scale. As the market leader in both data resilience and data security posture management, Veeam is built for the convergence of identity, data, security, and AI risk. Headquartered in Seattle with offices in more than 30 countries, Veeam protects over 550,000 customers worldwide, who trust Veeam to keep their businesses running. Join us as we go fearlessly forward together, growing, learning, and making a real impact for some of the world’s biggest brands.
About the Role:
Veeam, following its acquisition of Securiti AI - the leader in AI-powered data security posture management (DSPM) - is seeking a Senior Sales Engineer to drive technical leadership in our sales team, focusing on the Securiti AI portfolio.
You will guide customers from needs assessment to solution design, delivering hands-on demos and proof-of-concepts to showcase value. Success in this role requires strong technical skills and the ability to build trust with clients.
You’re already a technical leader trusted at executive levels when the conversation shifts from “we need to secure our data” to “we need to enable safe, compliant AI at scale.”
You’re already an expert on data/AI security and a proven enterprise sales engineer. Now it’s time to join the home of Data and AI Trust.
You’ve already seen our new release of Agent Commander, but you haven’t seen what’s coming next. After the acquisition of Securiti AI, Veeam has moved even further to the forefront of data resilience, protection, and trust, accelerating safe AI at scale.
Are you ready for an exciting new challenge at enterprise scale?
What You’ll Do:
What You’ll Bring (Required):
Preferred (strong differentiators):
#LI-JC3
What you'll get
Compensation Transparency
Veeam is committed to pay transparency and equitable compensation. For this role, the compensation range below reflects the expected total target compensation (TTC), inclusive of base pay and a competitive performance-based bonus. For roles with a commission plan, the compensation range represents On Target Earnings (OTE), which includes base salary plus variable commission. When determining compensation, Veeam takes into consideration factors such as experience, education, and skills. Offers are typically made below the midpoint of the range.
Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice.
The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes.
By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice.
By submitting your application, you acknowledge that the information provided in your job application and any supporting documents is complete and accurate to the best of your knowledge. Any misrepresentation, omission, or falsification of information may result in disqualification from consideration for employment or, if discovered after employment begins, termination of employment.
Ready to apply?
Apply to Veeam Software
Founded in 2004, NetBrain is the leader in no-code network automation. Its ground-breaking Next-Gen platform provides IT operations teams with the ability to scale their hybrid multi-cloud connected networks by automating the processes associated with Diagnostic Troubleshooting, Outage Prevention and Protected Change Management. Today, over 2,500 of the world’s largest enterprises and managed services providers leverage NetBrain’s platform.
What We Need
The NetBrain release team is responsible for managing releases of NetBrain products. We are seeking a Software Release Engineer to join our team. The Software Release Engineer will report to the Architect team and work on the design, build, and implementation of CI/CD flows and installation packages. The world’s leading enterprises rely on our products to automatically diagram, troubleshoot, and design their networks. The software development team at NetBrain fosters a challenging work environment and encourages innovation, teamwork, and creativity.
What You'll Do
What You Bring
What We Offer
Our comprehensive compensation package is vital in how we recognize the impact that our people make in helping us achieve our goals.
For this role, the estimated base salary range is between CAD $78,000 - CAD $93,000 plus potential bonus. The actual salary may vary based on a range of factors, including market and individual qualifications objectively assessed during the interview process.
Please note that the range provided is a guideline, and may be modified. People Experience offers a comprehensive benefits package in addition to cash compensation that includes, but is not limited to, RRSP and medical/dental coverage. Speak with your Recruiter for more details on our Total Rewards philosophy.
NetBrain invites all interested and qualified candidates to apply for employment opportunities.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or other characteristics protected by law.
If you have a disability that prevents or limits your ability to use or access the site, or if you require any other accommodation in the application process due to a disability, you may request a reasonable accommodation. To make a request, please contact our People Team at: people@netbraintech.com and we will be happy to assist you.
In compliance with applicable laws, NetBrain conducts holistic, individual background reviews in support of all hiring decisions.
It is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.
Ready to apply?
Apply to NetBrain
The mission of Speechify is to make sure that reading is never a barrier to learning.
Over 50 million people use Speechify’s text-to-speech products to turn whatever they’re reading – PDFs, books, Google Docs, news articles, websites – into audio, so they can read faster, read more, and remember more. Speechify’s text-to-speech reading products include its iOS app, Android App, Mac App, Chrome Extension, and Web App. Google recently named Speechify the Chrome Extension of the Year and Apple named Speechify its 2025 Design Award winner for Inclusivity.
Today, nearly 200 people around the globe work on Speechify in a 100% distributed setting – Speechify has no office. They include frontend and backend engineers and AI research scientists from Amazon, Microsoft, and Google; graduates of leading PhD programs like Stanford's; alumni of high-growth startups like Stripe, Vercel, and Bolt; and many founders of their own companies.
Overview
We're hiring for the Data side of our AI team at Speechify. This role is responsible for all aspects of data collection to support our model-training operations. We build high-quality datasets at petabyte scale and low cost through tight integration of infrastructure, engineering, and research work. We are looking for a skilled Software Engineer to join us.
What You’ll Do
An Ideal Candidate Should Have
What we offer
Think you’re a good fit for this job?
Tell us more about yourself and why you're interested in the role when you apply.
And don’t forget to include links to your portfolio and LinkedIn.
Not looking but know someone who would make a great fit?
Refer them!
Speechify is committed to a diverse and inclusive workplace.
Speechify does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
Ready to apply?
Apply to Speechify
Our mission is to democratize finance for all. An estimated $124 trillion of assets will be inherited by younger generations in the next two decades, the largest transfer of wealth in human history. If you’re ready to be at the epicenter of this historic cultural and financial shift, keep reading.
We are building an elite team, applying frontier technologies to the world’s biggest financial problems. We’re looking for thoughtful problem-solvers and builders who want to make a meaningful contribution. Robinhood is a place where people take ownership of their work and help improve financial access for all. We operate with high standards, clear accountability, and a strong focus on security and ethics in everything we build!
The Red Team’s mission is to identify and reduce real-world security risks across Robinhood by simulating adversary behavior and testing defenses. As a Staff Offensive Security Engineer, you will plan and execute security assessments across applications, infrastructure, and physical environments, and partner closely with engineering and security teams to strengthen detection and response capabilities. You will help prioritize risk, contribute to remediation efforts, and develop tools and techniques that improve how we test and secure our systems. Your work will directly support the safety and reliability of products used by millions of customers.
This role is based in our Menlo Park, CA office, with in-person attendance expected at least 3 days per week.
At Robinhood, we believe in the power of in-person work to accelerate progress, spark innovation, and strengthen community. Our office experience is intentional, energizing, and designed to fully support high-performing teams.
In addition to the base pay range listed below, this role is also eligible for bonus opportunities + equity + benefits.
Base pay for the successful applicant will depend on a variety of job-related factors, which may include education, training, experience, location, business needs, or market demands. The expected base pay range for this role is based on the location where the work will be performed and is aligned to one of 3 compensation zones. For other locations not listed, compensation can be discussed with your recruiter during the interview process.
Base Pay Range:
Click here to learn more about our Total Rewards, which vary by region and entity.
If our mission energizes you and you’re ready to build the future of finance, we look forward to seeing your application.
Robinhood provides equal opportunity for all applicants, offers reasonable accommodations upon request, and complies with applicable equal employment and privacy laws. Inclusion is built into how we hire and work—welcoming different backgrounds, perspectives, and experiences so everyone can do their best. Please review the Privacy Policy for your country of application.
Ready to apply?
Apply to Robinhood
As a Production Support Engineer I at Marqeta, you will play a pivotal role in our commitment to customer satisfaction and the seamless operation of our products and services. You will serve as the first line of contact for our customers, handling and resolving technical issues using established procedural documents and some technical analysis, while translating technical jargon into user-friendly language. In addition, you will collaborate with our Engineering teams to manage software updates.
Your role will also involve handling problems in all areas of Marqeta's products and services and ensuring that our customers get the best support. For complex issues, you would follow escalation procedures engaging Senior members and Engineering teams. At Marqeta, we value the essential role our Production Support Engineers play in our service delivery chain and look forward to welcoming you to our team.
We work Flexible First. This role can be performed remotely anywhere within Ontario or British Columbia, Canada. We’d love for you to join us!
This position is for an existing vacancy.
The Impact You’ll Have
Who You Are
Nice to haves
Compensation and Benefits
Marqeta is a Flex First company which allows you to choose your best working environment, whether that be from home or at a company office. To support Flex First, we calibrate pay to a competitive value according to working location.
When determining salaries, we consider several factors including, but not limited to, skills, prior experience, and work location. The new-hire base salary range for this position is CAD $60,600 - $75,800.
We also believe in recognizing the contributions of our people. That's why we award annual bonuses to eligible employees, rewarding both individual performance and the success of the entire company.
Along with monetary compensation, Marqeta offers
Ready to apply?
Apply to MQ Referrals Only
Nubank is one of the largest digital financial platforms in the world, with more than 122 million customers across Brazil, Mexico, and Colombia. Guided by our mission to fight complexity and empower people, we are redefining financial services in Latin America and this is still just the beginning of the purple future we're building.
Listed on the New York Stock Exchange (NYSE: NU), we combine proprietary technology, data intelligence, and an efficient operating model to deliver financial products that are simple, accessible, and human.
Our impact has been recognized by global rankings such as Time 100 Companies, Fast Company’s Most Innovative Companies, and Forbes World’s Best Bank. Visit our institutional page: https://international.nubank.com.br/careers/
Senior System Engineer - Systems Performance Team
The Systems Performance team is part of the Computing Squad (Foundation / Runtime Platforms). You will join a team focused on building deep diagnostic tools and performing high-level analysis to reduce latency and infrastructure costs and to increase service efficiency. You will execute performance investigations and identify systemic bottlenecks across one of the largest JVM-based microservice architectures in the world, interacting with everything from the Linux kernel to cloud-wide orchestration.
We are looking for a person who has
Nubank operates in a hybrid model, where teams collaborate remotely and periodically come together for about one week of in-person sessions. For Canadian team members, these sessions typically take place in one of our hubs (Brazil, Mexico, Colombia, or the United States) and are communicated well in advance to allow proper planning, with travel support provided to ensure equitable access to these global collaboration opportunities.
Our recruitment process may involve the use of artificial intelligence–enabled tools, such as automated interview transcription and analysis, to support the evaluation process. Artificial intelligence is not used to make final hiring decisions; all decisions are made by human reviewers.
Ready to apply?
Apply to Nubank
IXL Learning, developer of personalized learning products used by millions of people globally, is seeking Software Developers who have a passion for technology and education. You will help us build the tools needed to provide in-demand new integrations for IXL’s largest school districts.
At IXL Learning, we are dedicated to creating first-of-their-kind products that leverage cutting-edge technologies to solve seemingly intractable problems in education. Our users count on us to make learning as effective as can be, so we're looking for exceptional people who are committed to solving the real-world challenges faced by students and teachers around the world.
As a Software Developer, New Grad on our Integrations team, you will design and develop the tools and systems needed to simplify the setup of IXL for schools and districts. These integrations will help IXL work seamlessly with the technology ecosystems of all of our customers. This is an amazing opportunity for you to join a mission-driven, high-growth company. At IXL, we find it immensely satisfying to develop products that impact the lives of students everywhere, and we are eager to have you join our team.
This position requires you to be in our Toronto, Ontario, Canada, office.
The base salary range for this full-time position is $90,000 to $92,000 CAD + benefits. Our pay ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire pay for the position. Individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
IXL Learning is the country's largest EdTech company. We reach millions of learners through our diverse range of products. For example:
Our mission is to create innovative products that will make a real, positive difference for learners and educators, and we're looking for passionate, mission-minded people to join us in achieving this goal. We have a unique culture at IXL that fosters collaboration and the open exchange of ideas. We value our team and treat one another with kindness and respect. We approach our work with passion, tenacity, and authenticity. We find it immensely satisfying to develop products that impact the lives of millions, and we are eager to have you join our team.
At IXL, we value diversity in age, race, ethnicity, gender, sexual orientation, physical and mental ability, political and religious beliefs, and life experience, and we are proud to promote a work environment where everyone, from any background, can do their best work. IXL Learning is an equal opportunity employer and does not discriminate against applicants and employees based on any legally protected category.
Ready to apply?
Apply to IXL Learning
As Marqeta’s Security Operations Intern, you will gain hands-on experience building and validating security operations capabilities for a publicly traded payments technology company. You’ll join the Security Operations and Response team within the Product and Infrastructure Security organization, where you’ll validate and formalize incident response procedures, develop SOAR-based runbook automations, and design tabletop exercises that test our operational readiness against real-world threat scenarios. This role is grounded in security operations fundamentals—procedure development, incident response methodology, and team coordination—with opportunities for exposure to detection engineering and automation workflows.
We work Flexible First. This role can be performed remotely anywhere within Ontario or British Columbia, Canada. We’d love for you to join us!
This will be a 12-week internship program, beginning on June 8th and running through August 28th, 2026.
This position is not for an existing vacancy.
At this point, we hope you're feeling excited about the role. You're encouraged to apply even if your experience doesn't precisely match the job description. Your skills and passion will stand out—and set you apart—especially if your career has taken some extraordinary twists and turns. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates, so again, don’t hesitate to apply — we’d love to hear from you.
When determining pay, we consider several factors including, but not limited to, skills, prior experience, and work location. The 2026 internship weekly rate for this position is $1,468 CAD/week.
Along with monetary compensation, Marqeta offers Interns:
Ready to apply?
Apply to MQ Referrals Only
Lightmatter is leading the revolution in AI data center infrastructure, enabling the next giant leaps in human progress. The company invented the world’s first 3D-stacked photonics engine, Passage™, capable of connecting thousands to millions of processors at the speed of light in extreme-scale data centers for the most advanced AI and HPC workloads.
Lightmatter raised $400 million in its Series D round, reaching a valuation of $4.4 billion. We will continue to accelerate the development of data center photonics and grow every department at Lightmatter!
If you're passionate about tackling complex challenges, making an impact, and being an expert in your craft, join our team of brilliant scientists, engineers, and accomplished industry leaders.
Lightmatter is (re)inventing the future of computing with light!
We are hiring a talented software engineer to help us build the next generation of photonic AI processors and interconnects. In this role, you will be responsible for developing and extending the device software and firmware stack for Photonic Compute and Photonic interconnect products. You will collaborate with other software teams and hardware systems teams to develop security, telemetry, virtualization, and remote administration functionality.
We offer competitive compensation. The base salary range for this role is determined based on location, experience, educational background, and market data.
Benefits eligibility may vary depending on your employment status and location. Lightmatter recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, sex, disability, age, veteran status, or any other status protected by applicable law.
Export Control
Candidates must be able to comply with the federally mandated requirements of U.S. export control laws.
Ready to apply?
Apply to Lightmatter
Rumble is the Freedom-First technology platform. We proudly offer a video platform, cloud services, advertising solutions, and a non-custodial cryptocurrency wallet.
The Senior Back-End Developer is responsible for designing, building, and maintaining the high-performance server-side systems that power a large-scale video platform serving millions of users. This role encompasses architecture and optimization of backend services, database design, caching strategies, and API development. You will also be responsible for integrating front-end elements built by your coworkers into the application, so a solid understanding of front-end technologies is necessary.
Duties/Responsibilities:
- Architect and implement scalable backend systems and features for a high-traffic video platform
- Design and optimize MySQL database schemas for performance at scale
- Implement and maintain caching strategies using Memcached and Redis
- Build and maintain APIs that serve client-facing applications
- Integrate user-facing elements developed by front-end developers with server-side logic
- Identify performance bottlenecks and implement optimizations for maximum speed and scalability
- Develop and maintain background tasks and data pipelines handling very large datasets
- Conduct code reviews and drive technical decisions on system design and architecture
- Maintain, refactor, and modernize legacy codebases
- Build reusable libraries and establish patterns for future development
- Other duties, as assigned
Requirements:
- 8 years of experience as a back-end developer
- 10 years of experience with object-oriented programming languages
- 5 years of experience with PHP specifically, including PHP 8+
- Strong experience with MySQL, including query optimization, indexing strategies, and schema design
- Experience with caching layers (Memcached, Redis)
- Proven ability to build and optimize systems operating at high scale and throughput
- In-depth understanding of web development and HTTP protocols
- Experience with Linux server environments, including navigating consoles, reading logs, and troubleshooting production issues
- Experience with or knowledge of front-end languages such as JS/TypeScript, HTML, and CSS
- Willingness to jump in on any project, when needed, regardless of code quality
Preferred Qualifications:
- Knowledge of video technologies, containers, codecs, and live streaming
- Experience with NGINX configuration and optimization
- Experience with WebSocket for real-time communication
- Experience in Bash scripting and automation
- Understanding of networking fundamentals
- Familiarity with static analysis tools (e.g., PHPStan) and modern PHP coding standards
Desired Qualifications:
- Degree in Computer Science/Engineering or related field
- Experience migrating or modernizing legacy PHP codebases
- Experience with server-side rendering architectures
- Experience with CI/CD pipelines and automated testing (PHPUnit)
Annual Compensation Range:
$135,000 - $154,000 CAD base + benefits + equity
Note: The salary range listed for this position is a good faith estimate based on experience, qualifications, and internal compensation structure. The actual salary offered varies depending on the candidate's skill level and experience. This posting refers to an active vacancy within the organization.
Why Our Team Loves Working Here:
EEO Statement:
Rumble is an equal opportunity employer. We promote an equal playing field where everyone has the same opportunities regardless of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability status, or any other applicable characteristics protected by law. Rumble is an active participant in the e-verify program.
Physical demands of the position:
While performing the duties of this job, the employee is regularly required to sit for prolonged periods of time while using a computer and/or keyboard. The employee is required to communicate verbally and hear. The employee may be required to walk, reach with hands and arms, balance, and stoop or kneel. The employee may occasionally be required to lift and/or move up to 15 pounds. Specific vision abilities required by this job include clarity of vision at approximately 20 inches or less (i.e., working with small objects or reading small print), including the use of computers.
Ready to apply?
Apply to Rumble
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra high-speed inference.
Thanks to the groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order of magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.
As an Infrastructure Hardware Technical Program Manager (Server and Network Systems) on the Cluster Architecture Team, you will drive end-to-end delivery of server and network platform programs across Cerebras CS-3–based AI clusters — from requirements and vendor selection through lab bring-up, qualification, and production rollout. You will be the execution owner for multi-team programs spanning OEM/ODM partners, component vendors, internal software/runtime teams and architects, validation/QA, and deployment/operations.
This role is intentionally technical: you must understand server, network, and system-level trade-offs well enough to run effective technical reviews, keep programs grounded in real constraints, and maintain a crisp decision trail, while partnering closely with the Compute / Server / Network Platform Architects for detailed technical direction and sign-off. You will also build shared understanding with our rack/elevations and physical datacenter design partners so that server and network changes land smoothly in real deployments (without owning physical DC design).
Responsibilities
Skills and Qualifications
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
Read our blog: Five Reasons to Join Cerebras in 2026.
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
Ready to apply?
Apply to Cerebras Systems
The AI Infrastructure Operations Engineer (SiteOps) is an entry-level individual contributor role focused on the deployment, bring-up, monitoring, and first-line troubleshooting of Cerebras AI infrastructure in data center environments. The role supports CS systems, cluster server hardware, cluster networking hardware, and hardware telemetry and monitoring tools.
Support reliable operation and scale-out of Cerebras AI clusters by executing defined hardware bring-up and validation procedures, monitoring telemetry, performing first-line troubleshooting, and escalating issues using established workflows.
Incident Support & Tooling
Learning & Development
Explicit Non-Responsibilities
Bachelor’s degree in a relevant engineering field or equivalent experience; 0–3 years experience in hardware operations, systems engineering, or datacenter environments; basic familiarity with server hardware, networking fundamentals, and Linux systems.
Internship or early-career experience in datacenter or hardware lab environments; exposure to monitoring or telemetry systems; comfort working in data centers.
What Success Looks Like
Consistent and correct execution of hardware bring-up procedures, early identification and escalation of issues, improvement of documentation quality, and clear progression toward more independent operational responsibility.
Career Path
This role progresses naturally toward Senior and Principal IC roles within AI Infrastructure Operations (SiteOps), with an optional management track.
Ready to apply?
Apply to Cerebras Systems
About The Role
We are seeking a highly skilled and experienced AI Infrastructure Operations Engineer to manage and operate our cutting-edge machine learning compute clusters. These clusters will give you the opportunity to work with the world's largest computer chip, the Wafer-Scale Engine (WSE), and the systems that harness its unparalleled power.
You will play a critical role in ensuring the health, performance, and availability of our infrastructure, maximizing compute capacity, and supporting our growing AI initiatives. This role requires a deep understanding of Linux-based systems, containerization technologies, and experience with monitoring and troubleshooting complex distributed systems. The ideal candidate is a proactive, dependable problem-solver with expertise in large-scale compute infrastructure and an advocate for customer success.
Preferred Skills And Requirements
Ready to apply?
Apply to Cerebras Systems
As a Compute / Server Platform Architect on the Cluster Architecture Team, you will own the server-side platform architecture that enables Cerebras CS3-based AI clusters (training and inference) to deliver predictable performance, scalability, and reliability. Our accelerators are network-attached, so the x86 server fleet is a first-class part of the end-to-end system: it runs critical-path runtime functions (for example orchestration, prompt caching, and IO/control services) and must be co-designed with software for token-level latency, throughput, and cost efficiency. You will translate workload behavior into CPU, memory, IO, PCIe, and host-networking requirements, drive platform evaluations with vendors, and provide technical leadership through qualification and production adoption in close partnership with other function leaders and TPMs.
Responsibilities
Skills and Qualifications
Ready to apply?
Apply to Cerebras Systems
As one of our Lead Site Reliability Engineers, you will combine hands-on technical expertise with strategic technical leadership across infrastructure and software development. You will own the design and evolution of major systems within our multi-cloud, multi-region, active-active content serving platform that serves upwards of 25 billion requests daily. Through a combination of architectural vision, cross-team collaboration, and mentorship, you will help drive reliability initiatives and define the technical strategy that scales our platform to 50 billion requests per day and beyond.
Responsibilities:
Qualifications:
The base pay range for this position is $154,000 - $200,000 CAD/year, which can include an additional bonus depending on the position ultimately offered, in addition to a full range of medical, financial, and/or other benefits. The base pay offered may vary depending on job-related knowledge, skills, and experience.
Studies have shown that women, communities of color, and historically underrepresented people are less likely to apply to jobs unless they meet every single qualification. We are committed to building a diverse and inclusive culture where all Inkers can thrive. If you’re excited about the role but don’t meet all of the abovementioned qualifications, we encourage you to apply. Our differences bring a breadth of knowledge and perspectives that makes us collectively stronger.
We welcome and employ people regardless of race, color, gender identity or expression, religion, genetic information, parental or pregnancy status, national origin, sexual orientation, age, citizenship, marital status, ethnicity, family or marital status, physical and mental ability, political affiliation, disability, Veteran status, or other protected characteristics. We are proud to be an equal opportunity employer.
Ready to apply?
Apply to Movable Ink
Rumble is seeking a Senior Infrastructure Engineer (CDN) to be a pivotal member of the team that builds and maintains the infrastructure at the core of the Rumble platform. Candidates should be highly experienced, highly skilled network infrastructure engineers who are excited about building and supporting a fast-growing, fault-tolerant, high-capacity platform. A successful candidate loves challenges and problem solving, has spent their career working on mission-critical infrastructure, and is motivated to work both independently and as part of a team.
What you will do:
Required Qualifications:
Preferred Qualifications:
Desired Qualifications:
Annual Compensation Range:
$150,000-$195,000 USD base + benefits + equity (If based in the United States)
$123,000-$142,000 CAD base + benefits + equity (If based in Canada)
Ready to apply?
Apply to Rumble
About the Role:
Rumble Cloud is seeking a Kubernetes Engineer to support our team in rolling out and operating our next-generation Kubernetes platform. This role will focus on our new CAPI/CAPO-based Kubernetes solution, which is designed to be compatible with our existing OpenStack Magnum API and will be deployed across our public cloud. You will help run the day-to-day operations of the Kubernetes service, assist with migrations and onboarding from our current Magnum-based offering, and act as an escalation point for complex customer issues that go beyond front-line support. This is a hands-on engineering role for someone who enjoys debugging difficult problems, improving reliability, and working closely with both platform engineers and customer-facing teams.
Key Responsibilities:
Required Skills & Experience:
Nice to Have (Preferred Skills):
Qualifications:
Annual Compensation Range:
$175,000 - $220,000 USD base + benefits + equity (If based in the United States)
$157,000 - $187,000 CAD base + benefits + equity (If based in Canada)
Ready to apply?
Apply to Rumble
Share this job
Rumble is the Freedom-First technology platform. We proudly offer a video platform, cloud services, advertising solutions, and a non-custodial cryptocurrency wallet.
Rumble is seeking an experienced Infrastructure Engineer (Data Center) responsible for the operation, maintenance, and optimization of critical data center infrastructure.
As a key member of the Data Center Services Team, you will:
Required Qualifications
Preferred Qualifications
Desired Qualifications
Annual Compensation Range:
$107,000 - $134,000 USD base + benefits + equity (If based in the United States)
$84,000 - $106,000 CAD base + benefits + equity (If based in Canada)
Note: The salary range listed for this position is a good faith estimate based on experience, qualifications, and internal compensation structure. The actual salary offered varies depending on the candidate's skill level and experience. This posting refers to an active vacancy within the organization.
Why Our Team Loves Working Here:
EEO Statement:
Rumble is an equal opportunity employer. We promote an equal playing field where everyone has the same opportunities regardless of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability status, or any other applicable characteristics protected by law. Rumble is an active participant in the e-verify program.
Physical demands of the position:
While performing the duties of this job, the employee is regularly required to sit for prolonged periods of time while using a computer and/or keyboard. The employee is required to communicate verbally and hear. The employee may be required to walk, reach with hands and arms, balance, and stoop or kneel. The employee may occasionally be required to lift and/or move up to 15 pounds. Specific vision abilities required by this job include clarity of vision at approximately 20 inches or less (i.e., working with small objects or reading small print), including the use of computers.
Ready to apply?
Apply to Rumble
Share this job
Canonical is a leading provider of open source software and operating systems to the global enterprise and technology markets. Our platform, Ubuntu, is very widely used in breakthrough enterprise initiatives such as public cloud, data science, AI, engineering innovation and IoT. Our customers include the world's leading public cloud and silicon providers, and industry leaders in many sectors. The company is a pioneer of global distributed collaboration, with 1100+ colleagues in 75+ countries and very few office based roles. Teams meet two to four times yearly in person, in interesting locations around the world, to align on strategy and execution.
The company is founder led, profitable and growing.
Location: This is an office based role. We expect the suitable candidate to be based in Xizhi District, New Taipei City, as you will be required to work on-site at our lab.
This is a Python software engineering opportunity for a computer lab engineer passionate about open source software, Linux, and the latest server and network technologies. Come build a rewarding, meaningful career working with the best and brightest people in technology at Canonical, a growing international software company. If you love hacking in your home lab and are curious about hardware, you will love this opportunity.
As a Python Engineer - Data Center Hardware Integration at Canonical, you will be responsible for the day-to-day management and operations of our lab in Taipei, which serves as the central point for Ubuntu server certification of US-based silicon and server designs. This includes software-defined hardware management, working with and developing data centre automation tooling (MAAS), interacting with vendors, asset tracking, and handling deliveries.
We consider geographical location, experience, and performance in shaping compensation worldwide. We revisit compensation annually (and more often for graduates and associates) to ensure we recognise outstanding performance. In addition to base pay, we offer a performance-driven annual bonus. We provide all team members with additional benefits, which reflect our values and ideals. We balance our programs to meet local needs and ensure fairness globally.
Canonical is a pioneering tech firm that is at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud, we are changing the world on a daily basis. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence - in order to succeed, we need to be the best at what we do.
Canonical has been a remote-first company since its inception in 2004. Work at Canonical is a step into the future, and will challenge you to think differently, work smarter, learn new skills, and raise your game. Canonical provides a unique window into the world of 21st-century digital business.
We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.
#LI-Onsite
Ready to apply?
Apply to Canonical
Share this job
Canonical is a leading provider of open source software and operating systems to the global enterprise and technology markets. Our platform, Ubuntu, is very widely used in breakthrough enterprise initiatives such as public cloud, data science, AI, engineering innovation and IoT. Our customers include the world's leading public cloud and silicon providers, and industry leaders in many sectors. The company is a pioneer of global distributed collaboration, with 1100+ colleagues in 75+ countries and very few office based roles. Teams meet two to four times yearly in person, in interesting locations around the world, to align on strategy and execution.
The company is founder led, profitable and growing.
Location: This is an office based role. We expect the suitable candidate to be based in Toronto as you will be required to work on-site at our lab.
We are hiring a MAAS Systems Engineer to focus on building tooling to increase data center operational efficiency. As a MAAS Systems Engineer at Canonical, you will be responsible for the day-to-day management and operations of our lab in the Toronto area, which we use for Ubuntu server certification of US-based silicon and server designs. You'll write and use software that deploys and configures servers and network switches, and solve difficult design problems related to doing this well and fast at scale. You will work with a mixture of hardware: cutting-edge new silicon that the Kernel and Partner Engineering teams work on to make it run Linux extremely well, as well as established hardware used to make sure new versions of Ubuntu are smooth and fast. This role is a combination of software engineering focused on reliable automated hardware commissioning and deployment, and of testing, troubleshooting and experimentation focused on improving the reliability and performance of our software. You don't need to be an expert in all of these areas, but you do need to be a good software engineer who is curious about complex distributed systems. If you love hacking in your home lab, have good Python skills, and are curious about data centre hardware, you will love this opportunity.
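Much of the "reliable automated commissioning and deployment" described above comes down to driving machines through a lifecycle and waiting dependably on state transitions. The sketch below is a hypothetical, simplified polling helper: the state names loosely mirror MAAS's machine lifecycle, but the function and its interface are illustrative assumptions, not MAAS's actual API:

```python
import time

# Hypothetical machine states, loosely modeled on MAAS's
# New -> Commissioning -> Ready -> Deploying -> Deployed lifecycle.
READY = "Ready"
DEPLOYED = "Deployed"
FAILED = "Failed"

def wait_for_status(get_status, target, timeout=600, poll=5, sleep=time.sleep):
    """Poll get_status() until it returns `target`.

    Raises RuntimeError if the machine reports a failed state, and
    TimeoutError if `target` is not reached within `timeout` seconds.
    `sleep` is injectable so tests can run without real delays.
    """
    waited = 0
    while True:
        status = get_status()
        if status == target:
            return status
        if status == FAILED:
            raise RuntimeError(f"machine reported {status!r} while waiting for {target!r}")
        if waited >= timeout:
            raise TimeoutError(f"gave up waiting for {target!r} after {timeout}s")
        sleep(poll)
        waited += poll

# Example with a scripted stand-in for a real status call:
statuses = iter(["Commissioning", "Commissioning", "Ready"])
print(wait_for_status(lambda: next(statuses), READY, poll=1, sleep=lambda s: None))
```

Injecting `sleep` keeps the helper testable at scale, which matters when the same loop runs against hundreds of machines in a lab.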
We consider geographical location, experience, and performance in shaping compensation worldwide. We revisit compensation annually (and more often for graduates and associates) to ensure we recognise outstanding performance. In addition to base pay, we offer a performance-driven annual bonus. We provide all team members with additional benefits, which reflect our values and ideals. We balance our programs to meet local needs and ensure fairness globally.
Canonical is a pioneering tech firm at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud, we are changing the world on a daily basis. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence - in order to succeed, we need to be the best at what we do. Canonical has been a remote-first company since its inception in 2004. Working here is a step into the future, and will challenge you to think differently, work smarter, learn new skills, and raise your game.
We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.
#LI-hybrid
Ready to apply?
Apply to Canonical
Share this job
Canonical is a leading provider of open source software and operating systems to the global enterprise and technology markets. Our platform, Ubuntu, is very widely used in breakthrough enterprise initiatives such as public cloud, data science, AI, engineering innovation and IoT. Our customers include the world's leading public cloud and silicon providers, and industry leaders in many sectors. The company is a pioneer of global distributed collaboration, with 1100+ colleagues in 75+ countries and very few office based roles. Teams meet two to four times yearly in person, in interesting locations around the world, to align on strategy and execution.
The company is founder led, profitable and growing.
Location: This is an office based role. We expect the suitable candidate to be based in Toronto as you will be required to work on-site at our lab.
We are hiring a Data Center Infrastructure Engineer to build and maintain MAAS test labs. As a Data Center Infrastructure Engineer at Canonical, you will be responsible for the day-to-day management and operations of our lab in the Toronto area, which we use for Ubuntu server certification of US-based silicon and server designs. You'll use software that deploys and configures servers and network switches to operate the data centre and update its configuration, make sure the hardware and cabling are perfectly organised and inventoried, and handle hardware deliveries and their onboarding and commissioning. You will work with a mixture of hardware: cutting-edge new silicon that the Kernel and Partner Engineering teams work on to make it run Linux extremely well, as well as established hardware used to make sure new versions of Ubuntu are smooth and fast. We run our data centres using software-driven operations, and you need to be familiar with Python software development to be effective in this role. If you love hacking in your home lab, have good Python skills, and are curious about data centre hardware, you will love this opportunity.
We consider geographical location, experience, and performance in shaping compensation worldwide. We revisit compensation annually (and more often for graduates and associates) to ensure we recognise outstanding performance. In addition to base pay, we offer a performance-driven annual bonus. We provide all team members with additional benefits, which reflect our values and ideals. We balance our programs to meet local needs and ensure fairness globally.
Canonical is a pioneering tech firm at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud, we are changing the world on a daily basis. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence - in order to succeed, we need to be the best at what we do. Canonical has been a remote-first company since its inception in 2004. Working here is a step into the future, and will challenge you to think differently, work smarter, learn new skills, and raise your game.
We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.
#LI-hybrid
Ready to apply?
Apply to Canonical
Share this job
Who we are:
MasterClass is the streaming platform where the world’s best come together so anyone, anywhere, can access and be inspired by their knowledge and stories. We put you in the room with the creators, thinkers, makers and leaders who have changed the world, so that you can change yours.
Members get unprecedented access to 200+ instructors and classes across a wide variety of fields, including Arts & Entertainment, Business, Design & Style, Sports & Gaming, Writing and more. Step into Nas’ recording studio and Gordon Ramsay’s kitchen, and go behind the big screen with James Cameron. Design your career with Elaine Welteroth, get ready to win with Lewis Hamilton, perfect your pitch with Shonda Rhimes and discover your inner negotiator with Chris Voss.
We’re a remote-first workforce with collaborative work spaces in San Francisco and Kitchener, Ontario, and employees in several U.S. states. If you’re interested in joining a dynamic, culture-driving company—where learning invaluable skills is all in a day’s work—we invite you to apply.
Snapshot of the Role:
We’re looking for a Staff Infrastructure Engineer who operates as a technical leader and force multiplier across the platform. This is a senior individual contributor role for someone who designs systems that scale beyond a single team, sets technical direction, and elevates how infrastructure is built and operated across MasterClass.
You’ll lead and provide direction for cloud infrastructure, developer platforms, video infrastructure, and emerging AI/ML infrastructure—partnering closely with product engineering, content production, data, and AI teams. This role is ideal for someone who combines deep systems expertise with strong software engineering, thrives in ambiguity, and takes ownership from strategy through execution.
This is not a people-management role, but it carries significant technical leadership responsibility.
What You Will Do:
About You (Requirements):
Who You Are
Nice To Haves
Nice-to-haves are not required; each represents a direction in which our infrastructure is evolving.
At MasterClass, we believe we put our best work forward when our employees bring together ideas that are diverse in thought. We are proud to be an equal opportunity workplace and are committed to equal employment opportunity regardless of race, color, religion, national origin, age, sex, marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or any other characteristic protected by applicable federal, state or local law. In addition, MasterClass will provide reasonable accommodations for qualified individuals with disabilities. If you have a disability or special need, we would like to know how we can better accommodate you.
The salary range listed is for candidates in Ontario, Canada. As a company, we have a location based strategy, which means the disclosed range estimate has been adjusted for geographic differential associated with the location where the position may be filled.
MasterClass’s salary ranges are based on paying competitively for our size and industry. In addition to salary, we also offer equity and comprehensive benefits (medical, dental, vision, flexible PTO, and more). The range listed reflects the expectations laid out in the job description; however, we are often open to a wide variety of profiles, and recognize that the person we hire may be less experienced (or more senior) than this job description as posted. If that ends up being the case, the updated salary range will be communicated to you as a candidate.
Ready to apply?
Apply to MasterClass
Share this job
Armis, the cyber exposure management & security company, protects the entire attack surface and manages an organization’s cyber risk exposure in real time. In a rapidly evolving, perimeter-less world, Armis ensures that organizations continuously see, protect and manage all critical assets - from the ground to the cloud. Armis secures Fortune 100, 200 and 500 companies as well as national governments and state and local entities, helping keep critical infrastructure, economies and society safe and secure 24/7.
Armis is a privately held company headquartered in California.
At Armis, our Sales Engineers (SEs) serve as the linchpin of every prospect engagement, working closely with our prospects to demonstrate the value of the Armis agentless platform via console demonstrations, proof-of-value deployments, and targeted training sessions.
What you'll do...
Work closely with our prospects to:
Education:
Experience:
Knowledge in one or more of the following:
The choices you make in your career journey matter. You want to do interesting work in an important field while also having time to live your life, which is why we place so much value in your life-work balance. Armis sets you up for success with comprehensive health benefits, discretionary time off, paid holidays including monthly me days, and a highly inclusive and diverse workplace. Put your unique experiences and perspective to work in an environment where they will enable you to thrive, grow, and live your life with integrity.
Armis is proud to be an equal opportunity employer. We never discriminate based on race, ethnicity, color, ancestry, national origin, religion, sex, sexual orientation, gender identity, age, disability, veteran status, genetic information, marital status or any other legally protected (or not) status. In compliance with federal law, all persons hired will be required to submit satisfactory proof of identity and legal authorization.
Ready to apply?
Apply to Armis Security
Share this job
We're seeking highly motivated students who possess strong technical skills and the ability to work in a fast-paced collaborative environment.
Internship opportunities are not limited to the summer and are available throughout the year.
Primary Responsibilities:
Requirements of the Candidate include:
Ready to apply?
Apply to Waterfront International Ltd
Share this job
We are looking for exceptionally bright and talented developers to develop and administer leading-edge, global 24×7 financial trading systems in a fast-paced, stimulating and dynamic environment.
Primary Responsibilities:
Requirements:
Ready to apply?
Apply to Waterfront International Ltd