Senior Data Engineer

  • Location

    Berlin, Germany

  • Sector:

    Data Analytics, Data Engineering, Data Science

  • Job type:

    Permanent

  • Salary:

    €65,000 - €85,000 per annum + self-development + flat hierarchies

  • Contact:

    Rebecca Newson

  • Email:

    Rebecca.Newson@darwinrecruitment.com

  • Job ref:

    JN -052020-86285_1589446328

  • Published:

    23 days ago

  • Start date:

    01/07/2020

Senior Data Engineer: Berlin
Python/PySpark/Scala: Hadoop/Spark/SQL/Airflow

**URGENT ROLE - Please note that, given the current situation, my client is initially focusing on candidates based locally in Germany, preferably Berlin**

Background:

I am working on behalf of a well-established, family-run player with a strong brand focus that has operated in the e-commerce market for over 40 years. They have expanded globally to become one of the largest companies in Berlin, operating in over 40 countries with 18,000 employees.

We have been helping them scale their Data Engineering and Data Science teams; you would join an existing data unit of 14 focused on the operationalisation of data science use cases.

Are you seeking strong career progression and for your work to have an impact?

Your Profile:

  • 3-4 years of (senior-level) programming experience in Python (preferred) or Scala
  • Experience building end-to-end ETL data pipelines in cross-functional teams
  • Experience with the Hadoop ecosystem preferred

Role & Duties:

  • Working on the operationalisation of Data Science use cases
  • Areas of focus: Price Optimisation/Logistics/Distribution and Recommendations
  • Working closely with Data Scientists on Python codebase to bring business critical Data Science applications into production
  • Working on open-source systems for data pipelines, using ETL tooling such as Airflow
  • Supporting the current migration to a new ERP system and building a data lake from design through to implementation, including maintenance of data pipelines in Python and Spark on DC/OS

This role is central to the cross-functional data team and to the business model: you will be responsible for producing business-critical data science applications and bringing them into production.
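For context on the kind of day-to-day work described above, here is a minimal sketch of an end-to-end ETL step in plain Python. All function names, fields, and data are illustrative assumptions, not taken from the client's actual codebase; in practice this logic would run inside an Airflow task on Spark.

```python
# Hypothetical extract-transform-load sketch: parse raw order records,
# filter and derive net revenue, then aggregate per day (a stand-in for
# a warehouse or data-lake write). Purely illustrative.
import json


def extract(raw_lines):
    """Parse newline-delimited JSON order records."""
    return [json.loads(line) for line in raw_lines]


def transform(orders):
    """Keep completed orders and compute net revenue per order."""
    return [
        {
            "order_id": o["order_id"],
            "net_revenue": round(o["gross"] - o["discount"], 2),
            "day": o["day"],
        }
        for o in orders
        if o["status"] == "completed"
    ]


def load(rows):
    """Aggregate net revenue per day; returns the loaded summary."""
    totals = {}
    for r in rows:
        totals[r["day"]] = round(totals.get(r["day"], 0.0) + r["net_revenue"], 2)
    return totals


raw = [
    '{"order_id": 1, "gross": 100.0, "discount": 10.0, "status": "completed", "day": "2020-06-01"}',
    '{"order_id": 2, "gross": 50.0, "discount": 0.0, "status": "cancelled", "day": "2020-06-01"}',
    '{"order_id": 3, "gross": 80.0, "discount": 5.0, "status": "completed", "day": "2020-06-02"}',
]
daily_revenue = load(transform(extract(raw)))
print(daily_revenue)  # {'2020-06-01': 90.0, '2020-06-02': 75.0}
```

The same extract/transform/load split scales up naturally: each stage becomes its own Airflow task, with Spark replacing the in-memory list operations.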

Why should I join?

  • Flat hierarchies
  • Multi-cultural, diverse and international English working environment
  • Permanent reinvention, space for creativity and individual career planning
  • Open culture, dynamic teams and freedom for hands-on mentality
  • Social commitment
  • Social Benefits: Pension scheme/capital formation benefits/30% employee discount
  • 30 days holiday

If you would like to discuss this role, please send your CV and call availability to: Rebecca.Newson@darwinrecruitment.com

Please note my client is committed to hiring during this time period and is continuing to hire.
