Data Engineer

  • Senior
  • Full-remote
  • Spain-based only

As a Data Engineer at DKL, you will play a critical role in designing, building, and optimizing our data infrastructure. Working alongside cross-functional teams, you’ll develop reliable data pipelines and maintain the integrity of large datasets used for analysis and reporting, directly impacting data-driven decision-making across the company.

Remote

100%

You will work from the location of your choice, provided you structure this in a way that is compatible with work residency in Spain. You will also need a high-bandwidth internet connection (>= 40 Mbps up/down). DKL has no physical headquarters: we take remote work very seriously, and our team is distributed across Spain and abroad.

Schedule

Flexible

You will work 40 hours per week with the flexibility to organize your schedule in a way that suits you. We do ask that you maintain sufficient overlap with the teams you collaborate with and attend daily stand-ups and occasional client meetings. We know that personal wellness is crucial for achieving optimal results.

Compensation

€40-60K

Opportunities to grow and advance your career, with €500 allocated each year for your educational needs.

€100 Amazon gift card at Christmas.

Vacations: 23 days/year.

    Role

    Learn about your responsibilities, how you will work, and who you will work with

    As a Data Engineer at DKL, you will be responsible for developing, operating, and maintaining scalable data architectures that support analysis, reporting, and machine learning applications. Your role will involve managing ETL processes, creating and running data warehouses, and ensuring the high performance and reliability of data systems. You will collaborate closely with product owners, data scientists, and analysts to translate business requirements into effective technical solutions while maintaining data quality and accessibility. As one of the primary contributors to DKL's data infrastructure, you will ensure our data solutions are efficient, accurate, and aligned with client goals.

    Responsibilities

    Your responsibilities will encompass a wide range of tasks, including but not limited to:

    • Designing, building, and optimizing data pipelines to handle large volumes of data from various sources at various frequencies, including real-time data.

    • Developing and maintaining data warehouse architecture, ensuring scalability and performance while taking organizational requirements into account to determine the optimal architecture.

    • Implementing ETL/ELT processes to extract, transform, and load data for reporting and analytics.

    • Collaborating with data scientists and analysts to support machine learning workflows and advanced analytics.

    • Ensuring data quality and compliance with company data governance standards.

    • Documenting data processes and infrastructure for internal use and continuous improvement.

    How will you work?

    You will collaborate closely with the data team, working alongside data scientists and analysts to build, optimize, and maintain DKL's data infrastructure. You will report directly to Biel Llobera and Matías Pizarro, receiving guidance on data strategy and infrastructure development. Together, you’ll ensure that our data-driven insights align with business objectives and remain accessible across the organization. You’ll also work alongside the Project Manager to align on project timelines and deliverables, collaborating with engineering leads from Backend, DevOps, and Frontend teams to ensure smooth data integration and effective data utilization across all projects.

    Who will you work with?

    Matías Pizarro, Data Architect

    With 28 years in software development and 8 years as Head of Engineering at McKinsey & Company, Matías leads our technical vision. He specializes in data engineering, AI, DevOps, and scaling teams, having grown Power Solutions Tech from 2 to 200 developers in just 5 years. Matías keeps Python, Pandas, Django, FreeBSD, and Bash in his daily toolkit and is passionate about using the right tools for the job. His leadership inspires innovation and excellence across our technical teams.

    Biel Llobera, Data Architect

    As a data architect with 10+ years of experience, Biel specializes in designing and implementing large-scale data platforms that support complex analytics and data-driven decision-making. He has a strong background in building robust, scalable data pipelines and ensuring data quality across a wide range of business requirements. He is proficient in industry-leading tools, including Airflow, DBT, Snowflake, and Databricks, and has extensive experience with the major cloud providers.

    What makes you a fit?

    Your qualifications

    Requirements

    • Bachelor's degree in Computer Science or a related field.

    • Proven experience in data engineering, including designing and maintaining data pipelines.

    • Strong Python programming and software engineering skills.

    • Strong SQL and analytical skills.

    • Proficiency with at least one of the main cloud platforms (AWS, GCP, or Azure) and data warehousing tools (Snowflake, Databricks, Redshift, or BigQuery).

    • Proficiency with a workflow orchestration tool, preferably Airflow.

    • Familiarity with data governance and security best practices.

    • Excellent problem-solving skills and the ability to both work independently and collaborate with a larger team in a remote setting.

    Nice-to-have

    • Experience with data streaming technologies, such as Kafka or Kinesis
    • Experience with machine learning pipelines and MLOps
    • Experience implementing a data mesh architecture
    • Experience with functional data engineering
    • Experience with Apache Spark
    • Experience with a Data Quality framework such as Great Expectations
    • Experience using DBT to orchestrate SQL transformations in a Data Warehouse
    • Cloud or data engineering certifications
    • Previous experience in a fast-paced, agile environment

    What are the first 6 months like?

    Your first six months will be structured to support your learning, integration, and progression as you settle into your role. This period aligns with our review checkpoints at 1, 3, and 6 months, ensuring a clear pathway to success during your probation period.

    Month 1

    Your first month will focus on onboarding and getting grounded in our data platforms, engineering practices, and team workflows. You’ll have access to comprehensive technical documentation and training resources, meet key stakeholders across data, analytics, and product teams, and start familiarizing yourself with our data architecture, pipelines, and development tools. This phase is all about building a strong foundation—setting up your local environment, understanding our deployment processes, and reviewing active projects. At the end of the month, we'll have a check-in to reflect on your experience, answer any technical or process-related questions, and ensure you have the support you need to move forward confidently.

    Months 2-3

    By month two, you'll start taking on defined responsibilities within our data engineering projects, collaborating closely with your team to plan deliverables, estimate workloads, and coordinate progress across stakeholders. During this phase, you'll begin owning smaller data pipelines or components within larger initiatives—whether that's building new data ingestion processes, optimizing existing workflows, or contributing to infrastructure improvements. This hands-on experience will help you build confidence with our tech stack and development practices. At the three-month mark, we'll have a dedicated review to reflect on your progress, discuss any technical or operational challenges, and identify growth opportunities as you continue to deepen your impact on the team.

    Months 4-6

    With solid experience under your belt, by month four, you'll be ready to lead your own data engineering projects more independently. During this stage, you'll take ownership of end-to-end delivery—designing, building, testing, and deploying scalable data solutions that support our business needs. You'll also focus on refining your technical skills, improving system performance, and contributing to best practices within the team. The six-month review will serve as a key milestone to evaluate your overall impact, technical growth, and collaboration while closing out the probation period and setting clear goals for your continued development within the team.

    What's the selection process?

    We aim to make our selection process smooth and informative, ensuring it's a two-way street where we get to know each other.

    01/

    Initial Meet & Greet

    A casual video call to introduce ourselves, discuss the role at a high level, and get to know each other's backgrounds and motivations. This call is all about seeing if we're a mutual fit.

    02/

    Role-focused interview

    A more focused discussion, diving into the role's specifics and exploring key data engineering scenarios you might encounter with us. This is where we'll go over some example cases, discuss your experience, and answer any questions you have about the day-to-day.

    03/

    Meet the team leads

    In this call, you'll meet some of our key team leads. This conversation helps you understand the company culture, our team dynamics, and the kind of cross-functional work you'll be doing. It's also a chance to talk more about the projects we're passionate about.

    04/

    Decision & Offer

    After the final discussion, we'll circle back with a decision. If we're a match, we'll be excited to extend an offer and welcome you aboard! If it turns out this isn't the right fit, we'll let you know as well and share our feedback, wishing you all the best in your career journey.

    Are you ready to take a new step in your career?

    Curious to find out more? Complete the form and send us your CV. And don't hesitate to ask questions!

    Max. 500 characters