All active Redshift roles based in Portugal.
About this job
At JUMO, we’ve built a smart financial services platform to make finance accessible to everyone. Our Data Engineering team ensures that our intelligent banking platform can acquire data from millions of users across globally distributed sources, such as GSM networks. Beyond ingress, the data is processed and stored in the cloud to support event-driven analytics and machine learning pipelines.
As a JUMO Junior Data Engineer, you will design, build and maintain scalable data architectures. You will monitor their performance, perform root cause analysis and provide solutions to any issues that might arise. You will work with large, complex data sets, which means experience working with Big Data is essential. On a typical day, we use Spark, Kafka, Python, SQL, Airflow and AWS technologies, such as Redshift. We work with Docker and Kubernetes for automating deployment, scaling, and management of containerized applications.
You will
You will need
Bonus if you have
We ask a lot of each other at JUMO, but we give a lot too.
You will love
Remote First
We operate a remote-first working approach: working remotely is our default way of working. Our environment is designed to foster innovation and enable collaboration. You have flexibility in where you work from (between time zones UTC+0 and UTC+3), as long as you are set up to work remotely with a strong, reliable connection, since we value online face time for collaboration at JUMO.
Diversity and Inclusion
At JUMO, we firmly believe that diversity strengthens our teams. We are dedicated to fostering an inclusive recruitment process that cultivates an environment where all individuals can be authentic, collaborate, and thrive.
Ready to apply?
Apply to JUMO
About this job
At JUMO, we’ve built a smart financial services platform to make finance accessible to everyone. Our Data Engineering team ensures that our intelligent banking platform can acquire data from millions of users across globally distributed sources, such as GSM networks. Beyond ingress, the data is processed and stored in the cloud to support event-driven analytics and machine learning pipelines.
As a JUMO Data Engineer, you will design, build and maintain scalable data architectures. You will monitor their performance, perform root cause analysis and provide solutions to any issues that might arise. You will work with large, complex data sets, which means experience working with Big Data is essential. On a typical day, we use Spark, Kafka, Python, SQL, Airflow and AWS technologies, such as Redshift. We work with Docker and Kubernetes for automating deployment, scaling, and management of containerized applications.
You will
You will need
Bonus if you have
We ask a lot of each other at JUMO, but we give a lot too.
You will love
Remote First
We operate a remote-first working approach: working remotely is our default way of working. Our environment is designed to foster innovation and enable collaboration. You have flexibility in where you work from (between time zones UTC+0 and UTC+3), as long as you are set up to work remotely with a strong, reliable connection, since we value online face time for collaboration at JUMO.
Diversity and Inclusion
At JUMO, we firmly believe that diversity strengthens our teams. We are dedicated to fostering an inclusive recruitment process that cultivates an environment where all individuals can be authentic, collaborate, and thrive.
Ready to apply?
Apply to JUMO
About DataCamp
DataCamp's mission is to empower everyone with the data and AI skills essential for 21st-century success. By providing practical, engaging learning experiences, DataCamp equips learners and organizations of all sizes to harness the power of data and AI. As a trusted partner to over 17 million learners and 6,000+ companies, including 80% of the Fortune 1000, DataCamp is leading the charge in addressing the critical data and AI skills shortage.
About the role
This role is suitable for experienced Data Engineers with three or more years of experience, so please apply only if your background matches that criterion. This is a Data Platform Engineering role within our Platform Engineering department, alongside our Infrastructure and Dev Platforms teams. You will build the data platform that the rest of DataCamp depends on, rather than doing analytics work for a single business unit.
As a data-driven organization, DataCamp runs a data lakehouse on Google Cloud's BigQuery, with reporting built in Looker. This BI reporting capability helps the DataCamp leadership team steer the company's OKRs and vision, and it helps our engineering teams make data-driven decisions on everything we do. Data also helps us measure how successful our product features and initiatives have been.
To provide data analytics for all our staff, DataCamp runs a fully automated data pipeline built with an infrastructure-as-code approach: Terraform and Ansible provision all of our data engineering tooling. The pipeline ingests data using Airbyte, and our Airflow cluster schedules a series of daily tasks that transform and clean the data and seed our data marts, which are authored in dbt to simplify reporting for each of our business units.
DataCamp currently has data marts for finance, engineering KPIs, engineering costs, and infrastructure costs, to make reporting as easy as possible for our stakeholders in those business units, and we aim to extend that offering. DataCamp's leadership team, data scientists, data analysts, and engineers all need the latest datasets refreshed daily so they can do their work. The Data Platform Engineering team is expected to keep that daily refresh running smoothly and to provide consultancy on best practices such as data ingestion, data mart design, and data product design.
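The daily ingest-transform-seed flow described above is, at its core, a dependency graph of tasks executed in order. A minimal sketch of that idea, using only the standard library (task names are illustrative assumptions, not DataCamp's actual Airflow DAG):

```python
# Hypothetical sketch of a daily refresh: ingest (Airbyte), clean and
# transform, then seed per-business-unit data marts. In production this
# would be an Airflow DAG; here a stdlib topological sort stands in.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on.
daily_pipeline = {
    "airbyte_ingest": set(),
    "clean_raw_data": {"airbyte_ingest"},
    "transform_core_models": {"clean_raw_data"},
    "seed_finance_mart": {"transform_core_models"},
    "seed_engineering_kpi_mart": {"transform_core_models"},
}

def run_daily_refresh(pipeline):
    """Return the tasks in a valid dependency-respecting execution order."""
    return list(TopologicalSorter(pipeline).static_order())
```

The ordering guarantee is the important part: a mart is never seeded before its upstream transform has run, which is exactly what an orchestrator like Airflow enforces at scale.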
A large part of this role is enabling AI agents and MCPs across the company. We are in a strong agent-first engineering push at DataCamp. That means the data platform is no longer just serving humans through Looker dashboards; it also serves Claude, Claude Code, and other agents through MCP servers that expose BigQuery, dbt metadata, data marts, and internal tooling in a safe, governed way. You will be building and maintaining those MCPs, shaping how teams across DataCamp get data into their agent workflows, and setting the standards for how agent-authored changes land in our data platform.
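"Safe, governed" agent access usually means validating what an agent asks for before it reaches the warehouse. A hypothetical sketch of the kind of check an MCP tool fronting BigQuery might apply (the dataset allowlist and rules are illustrative assumptions, not DataCamp's real policy or the MCP SDK):

```python
# Hypothetical governance check for agent-submitted SQL: read-only
# statements only, restricted to an allowlisted set of datasets.
ALLOWED_DATASETS = {"finance_mart", "engineering_kpis"}

def validate_agent_query(sql: str) -> bool:
    """Reject non-SELECT statements and references to non-allowlisted datasets."""
    statement = sql.strip().rstrip(";")
    if not statement.lower().startswith("select"):
        return False  # agents get read-only access
    # Crude extraction of dataset prefixes from dotted table references.
    referenced = {tok.split(".")[0] for tok in statement.split() if "." in tok}
    return referenced <= ALLOWED_DATASETS
```

A real MCP server would layer this behind authenticated tool calls and proper SQL parsing, but the shape is the same: the policy lives in the server, not in the agent's prompt.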
What to expect from DataCamp if you join us as a Data Engineer
It will be your role, as part of the Platform Engineering department, to work directly with our staff data engineer, the wider platform engineering teams, our data science and analytics teams, and stakeholders across product and engineering on all data initiatives, both internally and as part of DataCamp's data products.
You should join us at DataCamp if you want to be fully supported in learning how to create and maintain best-in-class data platform engineering processes in an agent-first environment. If you enjoy learning in a team environment, our data and platform experts will be on hand with a support network to help your career development. You will also have the creative freedom to shape the processes and roadmap for data platform engineering at DataCamp, including how we expose data to agents, how we govern agent access to production data, and how we measure the impact of agent-assisted data work.
You will spend a meaningful portion of your time working with and through AI agents, writing AGENTS.md specifications, building MCP connectors, designing skills, and using Claude Code day to day to move faster on pipeline work, dbt changes, and operational tasks. If all of this sounds exciting, then this could be the ideal job for you. We don't expect you to know everything when you join, but we want you to be as passionate about data and agent workflows as we are. What we expect in return is that you are naturally inquisitive, open to constructive feedback, and willing to learn and grow every day.
This role should give you a collaborative working environment to become a world-class data platform engineer and help you in your career. Our aim is for you to build the skills needed to consult and advise on data mart design, data product design, MCP architecture, and how to leverage AI agents to improve our internal processes. You will also be involved in helping our content team provide data engineering courses on Redshift, BigQuery, Snowflake, and other widely used data warehousing tools. This may sound like a lot, and it is, but within our Platform Engineering department we are confident we can support you every step of the way and achieve these outcomes as a team.
What do we expect from you as a Data Engineer
DataCamp has a strong bias towards providing self-serve systems for our teams so they can access the data they need to make data-driven decisions. That bias now extends to agents, where teams should be able to reach data through MCPs without us becoming a bottleneck. This means the data pipeline, MCP surfaces, and data development process need to be available, governed, and functional rather than a central bottleneck in the company.
You will, under the guidance of our staff data engineer, play a key part in planning future improvements for our whole data platform: the pipeline itself, the data development process, the testing process for data changes, and the agent and MCP surfaces that sit on top of it.
We need you to be enthusiastic and proactive by owning your day-to-day work or any operational issues that occur. We aim to do 80 percent capability development via planned quarterly OKR work and 20 percent support work. This work balance can only be achieved by automating everything we do and by leaning hard on agents to remove toil. We believe Data Platform Engineering should be a force multiplier for the business — both by giving humans self-serve data and by giving every team a reliable agent-accessible path to the data they need.
To be successful in the role, it is essential that you know Python and can write and advise on SQL best practices. You must also understand data ingestion, data processing, and reverse ETL, have a passion for data governance, and act as a gatekeeper for security on our data platform. Comfort working with and through AI agents is also essential: not just using them for autocomplete, but designing workflows around them.
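One concrete SQL best practice worth knowing cold is parameterized queries. A minimal, self-contained sketch using Python's stdlib sqlite3 driver (table and column names are illustrative):

```python
# Bound parameters keep user-supplied values out of the SQL text itself,
# which prevents SQL injection and lets the driver handle escaping.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "ada"))

def find_user(conn, name):
    """Look up a user by name via a bound parameter, not string formatting."""
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchone()
```

The same principle carries over to warehouse clients like the BigQuery SDK, which support named query parameters for exactly this reason.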
The ideal candidate
It's a plus if
Why DataCamp?
Joining DataCamp means becoming part of a dynamic, creative, and international start-up. Here are just a few of the reasons why you’ll love being on our team:
Our competitive compensation package goes beyond salary: you will also receive fringe benefits such as best-in-class medical insurance, including dental and vision. Depending on your location, additional benefits may be available to you.
At DataCamp, we value diverse experiences and perspectives. If you’re excited about this role but don't meet every qualification, we still encourage you to apply. We believe skills can be developed and are committed to fostering an inclusive workplace where everyone can thrive. Your unique talents and perspectives are what make our team great!
Ready to apply?
Apply to DataCamp
MISUMI Europa GmbH is one of the largest B2B e-commerce platforms serving designers’ everyday needs for automation parts. The company is a global manufacturer and distributor of industrial supply components, supporting a wide range of industries including automotive, 3D printing, medical manufacturing, and packaging.
MISUMI's unique business model enables companies to save significant process time while benefiting from reliable, on-time deliveries. With a make-to-order range, a one-stop shop offering over 80 sextillion standard and configurable parts, and the online contract manufacturing service meviy, components can be configured and ordered entirely according to designers’ needs.
You will be part of a multinational team of around 300 employees from more than 33 different nationalities across Europe. Diversity is a core strength, and the IT department consists of 22 people working closely together in a collaborative environment.
As a Data Engineer, you will design, build, and operate data pipelines and integrations that power analytics, reporting, and data-driven applications. You will focus on reliable data ingestion, ETL-based data integration, and enabling high-quality data flows across the data platform.
Your work will directly support data-driven decision-making by ensuring scalable, resilient, and secure data infrastructure in an agile, cross-functional environment.
3–5 years of professional experience in data engineering
Experience working with ETL platforms (preferably Talend) for batch and API integrations
Strong understanding of data ingestion patterns, schema evolution, data quality principles, and data modeling
Strong SQL skills
Experience with relational databases (e.g., PostgreSQL, Redshift, or Oracle)
Basic Java knowledge
Understanding of software development fundamentals
Experience contributing to CI/CD pipelines, version control (Git/Bitbucket), and code reviews
Strong analytical and problem-solving skills
Team-oriented with a structured and proactive working style
Professional experience in data engineering within ETL/ELT environments
Background working with batch processing and API-driven integrations
Experience evaluating and integrating internal and external data sources
Familiarity with data governance, platform standards, and security policies
Experience working in agile, cross-functional teams
Competitive salary ranging from €35,000 to €48,000
100% remote role with hiring focused on candidates based in Portugal
Opportunity to work on a technical environment managing 300 batch ETL jobs using Talend
Collaborative team structure with a focused data management team of two engineers
Structured interview process including screening and live technical assessment
Ongoing collaboration with weekly 15–20 minute update calls and end-of-week summaries
Thank you for considering this opportunity. Funded.club Senior Recruiters partner exclusively with Startups and are in direct communication with hiring managers and founding team members.
Funded.club uses AI-assisted tools as part of our candidate sourcing and screening process. All applications are reviewed by a human recruiter, who makes all decisions about which candidates to progress. If your application seems like a good fit for the position, a real member of our team will contact you soon!
Ready to apply?
Apply to Funded.club