Data Engineer Vacancies for Grade 12 Candidates | Apply Now


Nedbank is looking for a Data Engineer in Johannesburg to transform data into solutions. If you enjoy solving puzzles and organizing data, this position lets you influence how banks use technology. A Grade 12 qualification is the minimum requirement, but curiosity is the most important thing. Apply before 30 May 2025 and dive into a world of numbers that tell stories.

Details for Data Engineer Vacancies

  • Company- Nedbank
  • Job Type- Data Engineer
  • Location- Johannesburg, ZA
  • Closing Date- 30 May 2025
  • Job Ref ID- 138206

Job Purpose

Data Engineers leverage their expertise in data and related technologies, consistent with the Nedbank Data Architecture Roadmap, to develop technology-driven thought leadership in the Enterprise and to provide fit-for-purpose data solutions in support of data initiatives. They also enhance the bank’s data infrastructure to enable advanced analytics, machine learning, and artificial intelligence by providing clean, usable data to stakeholders. In addition, they design data ingestion, streaming, API provisioning, self-service, and big-data pipelines and solutions to support the Bank’s plan to transform into a data-driven organisation.
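For candidates wondering what a data pipeline looks like in practice, here is a minimal sketch of an extract-transform-load (ETL) flow in Python. It uses only the standard library; the source file, table name, and column names are hypothetical illustrations, not part of Nedbank’s actual stack.

```python
import csv
import sqlite3

# Hypothetical source file; real pipelines would read from golden sources
# (e.g. DB2, Kafka) and load a warehouse platform, not a local SQLite file.
SOURCE_CSV = "transactions.csv"

def extract(path):
    """Read raw rows from a CSV source."""
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows):
    """Conform rows to a common model: trim text fields, type the amount."""
    for row in rows:
        yield (
            row["account_id"].strip(),
            row["posted_date"].strip(),
            float(row["amount"]),  # a real pipeline would flag bad values
        )

def load(conn, records):
    """Load conformed records into the (illustrative) warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS transactions "
        "(account_id TEXT, posted_date TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)", records)
    conn.commit()

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(conn, transform(extract(SOURCE_CSV)))
```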

Job Responsibilities

  • Maintain, improve, clean, and manipulate data within the bank’s analytics and operational databases.
  • Data Infrastructure: Create and maintain scalable, optimised, supported, tested, reliable, and secure data infrastructure, including databases (DB2, PostgreSQL, MSSQL, HBase, NoSQL and more), data lake storage (Azure Data Lake Gen 2), cloud-based solutions (SAS, Azure Databricks, Azure Data Factory, HDInsight), and data platforms (SAS, Ab Initio, Denodo, Netezza, Azure Cloud). Secure data and protect privacy in conjunction with Information Security, the CISO, and Data Governance.
  • Data Pipeline Building (Ingestion, Provisioning, Streaming, and API): Build and maintain data pipelines to:
  • Create data pipelines for Data Integration (Data Ingestion, Data Provisioning, and Data Streaming) using both on-premise and cloud data engineering tool sets.
  • Efficiently extract data (Data Acquisition) from Golden Sources, Trusted Sources, and Writebacks, integrating data from multiple formats, sources, and structures.
  • Load the Nedbank Data Warehouse (Data Reservoir, Atomic Data Warehouse, Enterprise Data Mart).
  • Provision data to the relevant Lines of Business Marts, Regulatory Marts, and Compliance Marts via self-service data virtualisation.
  • Provide data to applications and Nedbank data consumers.
  • Transform data into the common data model used for data analysis and reporting, and deliver data in a uniform, useful format to Nedbank data stakeholders.
  • Manage big data (Hadoop), streaming (Kafka; a minimal consumer sketch follows this list), and data replication (IBM InfoSphere Data Replication) technologies.
  • Use the data integration tool set (Ab Initio) and cloud data integration tools (Azure Data Factory and Azure Databricks).
  • Data Modelling and Schema Build: In collaboration with Data Modellers, create data schemas and database models for data stores such as the Data Reservoir, Data Lake, Atomic Data Warehouse, and Enterprise Data Marts.
  • Nedbank Data Warehouse Automation: Automate, monitor, and enhance the performance of the data warehouse.
  • Collaboration: Work with Data Analysts, Software Engineers, Data Modellers, Data Scientists, Scrum Masters, and Data Warehouse teams. Contribute to detailed data architecture designs, manage Epics from beginning to end, and ensure that data solutions deliver business benefits.
  • Information Quality and Data Governance: Ensure that adequate quality controls are in place within the data pipelines to maintain a high degree of quality, consistency, and security.
  • Optimization and Performance: Monitor and optimize the performance of the Nedbank data warehouse and its integration patterns, including real-time and batch jobs, streaming, and APIs.
  • API Development: Develop APIs that support the data-driven organisation, working closely with Software Engineers to ensure that the data warehouse is designed to work with APIs.
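To illustrate the streaming side of the role, below is a minimal sketch of a Kafka consumer using the open-source kafka-python client. The topic name, broker address, and message fields are placeholder assumptions for illustration, not Nedbank’s actual configuration.

```python
import json

from kafka import KafkaConsumer  # open-source kafka-python client

# Placeholder topic and broker; a real deployment would point at the
# bank's managed Kafka cluster rather than a local broker.
consumer = KafkaConsumer(
    "transactions",
    bootstrap_servers=["localhost:9092"],
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would validate and conform each event, then land it
    # in the data reservoir; printing stands in for that step here.
    print(event.get("account_id"), event.get("amount"))
```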

Qualifications

  • Matric / Grade 12 / National Senior Certificate
  • Advanced Diplomas / National First Degrees
  • Field of Study: BCom, BSc, BEng

Professional Knowledge

  • Cloud Data Engineering (Azure, AWS, Google)
  • Data Warehousing
  • Databases (PostgreSQL, MS SQL, IBM DB2, HBase, MongoDB)
  • Software Programming (Python, Java, SQL)
  • Data Modelling and Analysis
  • Data Pipelines and ETL tools (Ab Initio, ADB, ADF, SAS ETL)
  • Agile Delivery
  • Problem-solving skills