We are looking for a savvy Data Architect or Lead Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced big data platform builder and data wrangler who enjoys optimizing data systems and building them from the ground up. They will lead various project implementations and provide technical guidance to the team. This person will also meet with customers to understand their use cases and design the technical architecture to support them.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Work with data and analytics experts to strive for greater functionality in our data systems.
We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have:
- Advanced SQL knowledge and experience with relational databases, as well as familiarity with a variety of database technologies.
- Experience building and optimizing big data pipelines, architectures and datasets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience interacting with customers and various stakeholders.
- Strong analytical skills related to working with unstructured datasets.
- Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable data lakes.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
They should also have experience using the following software/tools:
- Big data technologies: Hadoop, Spark, Kafka, etc.
- Relational SQL and NoSQL databases, including Postgres and Cassandra.
- Data pipeline and workflow management tools: Airflow, NiFi, etc.
- Cloud services: AWS (EMR, RDS, Redshift, Glue), Azure (Databricks, Data Factory), GCP (Dataproc, Pub/Sub).
- Stream-processing systems: Storm, Spark Streaming, Flink, etc.
- Object-oriented, functional or scripting languages: Python, Java, Scala, etc.