Responsible for developing, testing, and implementing data engineering solutions that power analytical and reporting outcomes; for analyzing and preparing the data needed for data-science-driven outcomes; and for managing and maintaining metadata and data structures, as well as providing support for post-deployment activities when needed.
Roles and Responsibilities
In this role, you will:
- Leverage technical data dictionaries and business glossaries to analyze the datasets
- Prior expertise in the following areas is critical for this role:
  - BI tools such as Sisense and Tableau
  - Data warehouse products such as Redshift
  - Scripting languages such as Python
- Perform data profiling and data analysis for any source systems and the target data repositories
- Understand metadata and the underlying data structures needed to standardize the data load processes.
- Develop data mapping specifications based on the results of data analysis and functional requirements
- Perform a variety of data loads & data transformations using multiple tools and technologies.
- Build automated Extract, Transform & Load (ETL) jobs based on data mapping specifications
- Validate the data mapping results against the expected results
- Implement Data Quality (DQ) rules provided
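As an illustration of the ETL and data-quality work described above, here is a minimal sketch in Python using the standard-library sqlite3 module. All table names, column names, and the DQ rule itself are hypothetical examples, not part of any specific system named in this role.

```python
import sqlite3

def run_etl(conn):
    """Minimal ETL sketch: extract source rows, apply a mapping and a
    data-quality (DQ) rule, then load the standardized rows.
    Table and column names here are hypothetical."""
    cur = conn.cursor()
    # Extract: read raw records from the (hypothetical) source table
    rows = cur.execute("SELECT id, amount, currency FROM src_orders").fetchall()
    # Transform: apply a mapping spec (normalize currency case, round amounts)
    # and a DQ rule (reject null or negative amounts)
    cleaned = [
        (rid, round(amount, 2), currency.upper())
        for rid, amount, currency in rows
        if amount is not None and amount >= 0  # DQ rule
    ]
    # Load: write standardized rows into the target table
    cur.executemany(
        "INSERT INTO tgt_orders (id, amount, currency) VALUES (?, ?, ?)",
        cleaned,
    )
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE src_orders (id INTEGER, amount REAL, currency TEXT)")
    conn.execute("CREATE TABLE tgt_orders (id INTEGER, amount REAL, currency TEXT)")
    conn.executemany(
        "INSERT INTO src_orders VALUES (?, ?, ?)",
        [(1, 10.5, "usd"), (2, -3.0, "eur"), (3, 7.25, "gbp")],
    )
    print(run_etl(conn))  # loads the rows that pass the DQ rule
```

In practice the mapping specification and DQ rules would come from the data analysis and functional requirements, and the job would run under an orchestration or ETL tool rather than as a standalone script.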
For roles outside the USA:
Bachelor's Degree with basic experience.
For roles in the USA: Bachelor's Degree with a minimum of 2 years of experience.
Desired Characteristics
Technical Expertise:
- Ability to understand logical and physical data models, big data storage architecture, data modeling methodologies, metadata management, master data management & data lineage techniques
- Hands-on experience in programming languages like Java, Python or Scala
- Hands-on experience in writing SQL scripts for Oracle, MySQL, PostgreSQL or Hive
- Experience in handling both Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) data models
- Experience with Big Data / Hadoop / Spark / Hive / NoSQL database engines (e.g., Cassandra or HBase)
- Exposure to unstructured datasets and the ability to handle XML and JSON file formats
- Exposure to Extract, Transform & Load (ETL) tools like Informatica or Talend
- Exposure to handling machine or sensor datasets from industrial businesses
- Knowledge of industrial applications in commercial, finance, industrial, or manufacturing settings
- Exposure to finance and accounting data domains
- Partner with other team members to understand the project objectives and resolve technical issues.
- Communicate project status or challenges clearly and concisely to cross-functional team members.
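The JSON handling mentioned above can be sketched with a short example: flattening a nested record into a flat, tabular-friendly structure. The sample sensor payload and its field names are hypothetical.

```python
import json

def flatten(record, parent_key="", sep="."):
    """Flatten a nested JSON-derived dict into a single-level dict
    with dotted keys, suitable for loading into a flat table."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten(value, new_key, sep=sep))
        else:
            items[new_key] = value
    return items

# Hypothetical machine/sensor payload
payload = json.loads(
    '{"machine": {"id": "M-01", "temp_c": 71.5}, "ts": "2020-01-01T00:00:00Z"}'
)
print(flatten(payload))
# {'machine.id': 'M-01', 'machine.temp_c': 71.5, 'ts': '2020-01-01T00:00:00Z'}
```

A similar flattening step is often the first move when landing semi-structured machine or sensor data into a relational warehouse such as Redshift.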
To comply with US immigration and other legal requirements, it is necessary to specify the minimum number of years’ experience required for any role based within the USA. For roles outside of the USA, to ensure compliance with applicable legislation, the JDs should focus on the substantive level of experience required for the role and a minimum number of years should NOT be used.
This Job Description is intended to provide a high level guide to the role. However, it is not intended to amend or otherwise restrict/expand the duties required from each individual employee as set out in their respective employment contract and/or as otherwise agreed between an employee and their manager.
Relocation Assistance Provided: Yes