
Data Engineer - Lausanne

  • Writer: Helen Von Trotsenburg
  • Apr 7
  • 2 min read




As part of the Data Platform Team, your mission will be to develop, deploy, and maintain data pipelines, web applications, and data warehouse solutions. You will play a critical role in enabling data consumption for business intelligence tools and various applications. Additionally, you will provide technical guidance to developers, including data scientists, to help them build efficient and scalable data solutions.


To expand the team, we are looking for a data engineer who is proficient in Python, Docker, and DBT, and has a sharp problem-solving mindset.


Objectives and Responsibilities of This Role


• Develop, deploy, and maintain data pipelines originating from diverse data sources, serving data scientists, downstream systems, and end users.

• Participate in the development of the enterprise data warehouse to support different analytical use cases.

• Conduct code reviews with data scientists and business intelligence developers to provide technical guidance and share best practices.

• Work closely with business IT teams, data scientists, and, where applicable, external providers to deploy, maintain, and monitor data processing and machine learning pipelines.

• Develop and maintain advanced CI/CD pipelines to ensure continuous integration, testing, and deployment of these data pipelines.

• Develop and maintain documentation of the data architecture and data management processes.



Required Skills and Qualifications


• Master’s degree in computer science, information technology, technology engineering, or an equivalent field.

• Two or more years of experience in data engineering, DevOps engineering, or software engineering.

• Strong experience with data storage solutions such as data lakes, data warehouses, and relational databases.

• Experience in developing and deploying Python and Spark pipelines running on Docker.

• Experience in building data vaults and developing business-focused data models, with a strong understanding of DBT for data transformation and modelling.

• Experience building CI/CD pipelines with Jenkins, GitLab, or similar tools is a plus.

• Familiarity with orchestration tools such as Airflow.

• Knowledge of Linux, Python, and SQL.

• Ability to communicate and collaborate effectively within the team and with external stakeholders.

• Excellent spoken and written English.




