As a Data Engineer, you will collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and to develop the Big Data solution.
Analyze business problems and help develop solutions for both near-real-time stream processing and batch processing on the Big Data platform.
Set up and run Hadoop development frameworks.
Requirements:
Experience: 5+ years
Willingness to explore and learn new technologies for creative business problem-solving
Ability to develop and manage scalable Hadoop cluster environments
Experience with Big Data technologies such as HDFS, Hadoop, Hive, YARN, Pig, HBase, Sqoop, and Flume
Working experience with Big Data services in a cloud environment
Experience with Spark, Scala, Kafka, Akka, Databricks, and core or advanced Java
Experience with NoSQL technologies such as HBase, Cassandra, or MongoDB; exposure to a Hadoop distribution such as Cloudera or Hortonworks (good to have)
Familiarity with data warehousing concepts, distributed systems, data pipelines, and ETL