Job Description

Role

To design, architect, and implement Big Data solutions using scalable distributed data processing technologies, including Hadoop and Splunk, for analytics applications.

Required Technical Skill Set

One or more of the following:

Experience with major big data solutions such as Hadoop, MapReduce, Hive, HBase, MongoDB, and Cassandra

Experience with big data tools such as Impala, Oozie, Mahout, Flume, ZooKeeper, and Sqoop

Experience working with ETL tools such as Informatica, Talend, and Pentaho

Firm understanding of major programming and scripting languages such as Java, PHP, Ruby, Python, and R, along with proficiency in Linux

Exposure to large-scale cloud computing infrastructure solutions such as Amazon Web Services and Elastic MapReduce

Desired Competencies (Technical/Behavioral Competency)

Must Have

Strong background with tools such as Hadoop, with prior experience as an architect on projects

Good to Have

Background in consulting and advisory services

TOGAF certification

Responsibilities and Expectations from the Role

Interpret and translate client requirements into a complete technology solution with an application blueprint.

Articulate the pros and cons of various technology options for data acquisition, storage, transformation, analysis, visualization, and security.

Demonstrate technology thought leadership to optimize the client's IT investment in a client-facing role.

Coordinate with the infrastructure team to assist with requirements and design of Hadoop clusters and to troubleshoot operational issues.

Lead a technical team of developers to ensure on-time delivery under various project execution methodologies such as Scrum.

Work with data scientists and business analysts to ensure the delivery of rich insights from in-depth analysis of data from multiple sources using various techniques.