Job Opportunity at Vodacom Tanzania Plc - Cloud Data Engineer

Job Description

Purpose

The Data Engineer delivers through self and others to:
•Load data from local and group sources onto the shared platforms needed for insight, analysis and commercial action;
•Build applications that make use of large volumes of data and generate outputs enabling commercial actions that deliver incremental value;
•Support local markets and group functions in obtaining business value from the data.

Key accountabilities and decision ownership: 

•Design and develop highly performant, scalable and stable Big Data cloud native applications
•Source data from a variety of systems, in the correct format, meeting data quality standards and assuring timely access to data and analytical insights
•Build batch and real-time data pipelines, using automated testing and deployment
•Build transformations to produce enriched data insights
•Integrate applications with business systems to enable value from analytic models and enable decision making
•Define and implement best practices relating to cloud economics, software engineering and data engineering to ensure a well-architected cloud framework with data management practices built into each design.
•Work with the architecture team to evolve the Big Data capabilities (reusable assets/patterns) and components to support the business requirements/objectives
•Research, investigate and evaluate new technologies and methods to improve delivery and sustainability of data applications and services
•Make contributions to the process of defining best practice for the agile development of applications to run on the Big Data Platform
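To make the accountabilities above concrete, the sketch below shows the kind of small batch pipeline the role describes (source data, apply a data-quality gate, produce enriched output). This is an illustrative example only; the function names and fields (`load_records`, `enrich`, `spend`, `revenue_band`) are hypothetical, not Vodacom systems.

```python
# Illustrative batch-pipeline sketch: parse raw records, filter out bad
# rows (a basic data-quality check), and add derived fields (enrichment).
import json
from datetime import datetime, timezone

def load_records(raw_lines):
    """Parse newline-delimited JSON, skipping malformed lines.
    In a production pipeline, bad rows would go to a dead-letter store."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue
    return records

def enrich(record):
    """Add a derived field and a processing timestamp to each record."""
    out = dict(record)
    out["revenue_band"] = "high" if record.get("spend", 0) >= 100 else "low"
    out["processed_at"] = datetime.now(timezone.utc).isoformat()
    return out

def run_pipeline(raw_lines):
    """End-to-end batch run: load, validate, transform."""
    return [enrich(r) for r in load_records(raw_lines)]

if __name__ == "__main__":
    raw = [
        '{"msisdn": "255700000001", "spend": 150}',
        "not valid json",
        '{"msisdn": "255700000002", "spend": 20}',
    ]
    for rec in run_pipeline(raw):
        print(rec["msisdn"], rec["revenue_band"])
```

In practice this logic would run inside a framework such as Spark or Airflow, with the same shape: ingestion, quality gating, then transformation.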

Core competencies, knowledge and experience:

•Experience managing the development life-cycle for agile software development projects
•Expert level experience in designing, building and managing data pipelines for batch and streaming applications
•Experience with performance tuning for batch-based applications such as Hadoop, including working knowledge of NiFi, YARN, Hive, Airflow and Spark
•Experience with performance tuning streaming applications for real-time data processing using Kafka, Confluent Kafka, AWS Kinesis, GCP Pub/Sub or similar
•Experience working with serverless services such as AWS Lambda
•Relevant practical working development experience with AWS (or GCP) big data analytics services. This includes AWS Glue, AWS Athena, and AWS Step Functions.
•Working experience with other distributed technologies such as Cassandra, MongoDB and Elasticsearch
•Java and Python programming ability would be an advantage
•Experience in metadata management, data modelling and schema management would be an added benefit.

Must have technical / professional qualifications: 
•A three-year degree in IT, IS or a related field is essential
•Degree in Computer Science, IT or Information Systems
•Relevant cloud certification at professional level
•5+ years' BI or related software development experience
•Agile exposure, Kanban or Scrum
