Freelance Data Engineer - 3 months+

  • Location:

    Berlin, Germany

  • Sector:

    Data Engineering

  • Job type:

    Contract

  • Salary:

    Negotiable

  • Contact:

    Jodi Barrow

  • Email:

    Jodi.Barrow@darwinrecruitment.com

  • Job ref:

    JN -122020-87486_1608040638

  • Published:

    about 1 month ago

  • Duration:

    3 Months

  • Expiry date:

    2021-01-14

  • Start date:

    2021-01-15

Partnered with a consultancy working heavily in the automotive domain, my customer is in the midst of a migration from AWS to Azure and is looking to bring on more cutting-edge projects from new and overseas customers.

To ensure they are prepared to take on these projects, they're hiring a Freelance Data Engineer to support the completion of the migration and to further develop their existing platform, while predominantly managing the new data sources.

As a Freelance Data Engineer, you will:

  • Work closely with the respective teams and take a large share of responsibility for the further development of their data landscape.
  • Act as the design lead for their cloud data platform
  • Own advanced design and architecture for their data lake
  • Handle further development of existing and new data pipeline processes
  • Play a key role in the analysis of all data interfaces and their transfer into the overall architecture
  • Hand over operational and some development topics to their offshore team
  • Establish and ensure standards of quality and stability across the entire process chain


The ideal candidate will come with:

  • Strong expertise in Python, (Azure) Databricks & Delta Lake
  • A very good understanding of data and in-depth knowledge of data modeling
  • Familiarity with a wide variety of modeling approaches for data lakes, data lakehouses and data warehouses
  • Experience handling CSV, JSON, Avro and Parquet file formats
  • Experience in using and configuring Spark as well as working with Spark clusters
  • Knowledge of common DWH solutions (Synapse, Snowflake, Exasol)
  • Experience in the Azure environment (Logic Apps, Blob Storage, Data Factory ...)
  • A high sense of responsibility, independence and initiative, with a 'hands-on' mentality
  • A strong analytical mindset and openness to innovative topics

Must be a fluent English speaker (German is a strong plus).

Fully remote, starting January 2021, for 3 months+.

Darwin Recruitment is acting as an Employment Business in relation to this vacancy.