Darwin is partnered with a leading technology business in the US that is looking to strengthen its Data Engineering capability across the group.
They are seeking to engage Senior Data Engineering professionals to help build scalable data pipelines and optimize enterprise data architecture.
This is a long-term engagement aligned to a multi-year roadmap, with a strong focus on AI and Machine Learning initiatives. Additional expertise will be required as these projects continue to expand.
Responsibilities
- Data Engineering: Build and orchestrate reliable data ingestion and transformation pipelines using tools such as Airflow.
- Cloud & Kubernetes Architecture: Deploy and maintain scalable, fault-tolerant infrastructure on GCP using Kubernetes (GKE).
- DevOps & CI/CD: Implement Infrastructure as Code (Terraform), CI/CD workflows, and automated deployment/testing pipelines (GitHub Actions).
- Python Development: Write clean, efficient code for automation, APIs, and cloud integration services.
- API & Service Deployment: Develop high-performance Python APIs (FastAPI/Flask) for ML predictions and core business logic.
- Monitoring & Observability: Configure monitoring, logging, and alerting (Prometheus, Grafana, Cloud Logging) to ensure high availability.
If you are interested, please get in touch at lungelo.nkosi@darwinrecruitment.com
Darwin Recruitment is acting as an Employment Business in relation to this vacancy.
Lungelo Nkosi