US Citizens, GC holders, H4 EAD, or H1B candidates willing to transfer to our W2
Location: Greenville, SC
Rate: $65/hr on our W2
Skills and Experience:
– Experience building large-scale data ingestion frameworks or leveraging COTS products for implementing batch frameworks (e.g., Spring XD, Kite SDK, etc.)
– Excellent understanding and implementation experience of Hadoop architecture, including the following technologies:
  – Hadoop Distribution: Hortonworks (preferred)
  – Data Storage: HDFS, HBase, Hive
  – Data Processing, Analysis & Integration: Spark (Python or Scala), Kafka, Impala, Sqoop
  – ETL tool: Talend
– Experience building stream-processing systems using solutions such as Storm or Spark Streaming
– Gather and process raw data at scale (including writing scripts, calling APIs, writing SQL queries, etc.)
– Design and develop data structures that support high-performing and scalable analytic applications
– Implement automation and related integration technologies with Ansible, Chef, or Puppet
Responsibilities:
– Work on technical architecture design, application design and development, testing, and deployment.
– Work on MPP and Big Data technologies such as Hadoop, data engineering, and data governance implementations and support.
– Produce technical specifications and designs for development, solution development/migration, and systems integration requirements.
– Participate in and lead internal development and external collaboration meetings.
– Conduct or coordinate tests to ensure that intelligence is consistent with defined needs.
– Oversee testing of data acquisition processes and their implementation into production.
– Work closely with the engineering team to integrate innovations and algorithms into data lake systems.
If interested, send your resume to hreela@gmail.com