• ID: #11990513
• Job type: Permanent
• Salary: TBD
• Source: Bank Of America
• Date: 2021-04-07
• Deadline: 2021-06-06

Vacancy expired!

Job Description:

We are seeking a highly technical and hands-on Big Data Solution Architect to join a team of solution architects who are the technical face of our platform. The candidate will support one of our lines of business by partnering with its development teams and architects to design the appropriate Big Data architecture in line with the customers' requirements and the platform's standards.

This candidate will be part of the larger architecture group that sets the strategy, direction, and standards for a 50+ PB Big Data analytics platform that is on a transformation journey to become self-serve, API-driven, and containerized. The candidate will be in the unique position of working with the technical members of the business applications running on the platform while also being able to influence the platform's technical strategy and direction.

Responsibilities:

• Lead the design, architecture, and building of data analytics solutions for an entire line of business. Work directly with platform tenants to understand requirements, align to the platform technology stack, and document the solution.

• Provide thought leadership as a member of the platform's architecture team

• Provide feedback from the tenants back to the larger architecture team on standards, process, and technology gaps. Partner with fellow architects to address those gaps.

• Conduct proof-of-concept work to determine proper configurations, design patterns, and general tool fit for the platform tenant

• Participate in regular status meetings to track progress, resolve issues, mitigate risks and escalate concerns in a timely manner

Required Skills:

• Computer Science/Software Engineering (or related) degree

• 6+ years developing software in C, C++, Java, Scala, Ruby, and/or Python

• 5+ years of hands-on experience with Hadoop, Hive, Spark, Sentry, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, S3, etc.

• Experience designing and implementing complex solutions for distributed systems

• Must have good knowledge of and experience with cloud and containerization technologies: Azure, Kubernetes, OpenShift, and Docker

• Experience with and detailed knowledge of Master Data Management, ETL, Data Quality, metadata management, data profiling, micro-batches, and streaming data loads

• Working knowledge of application and system availability, scalability and distributed data platforms

• Excellent communication skills, particularly the ability to present complex findings to audiences at various levels of the organization

• Ability to integrate research and best practices into problem avoidance and continuous improvement

• Exercise independent judgment in methods, techniques and evaluation criteria for obtaining results

• Demonstrated analytical and problem-solving skills

• Strong leadership presence, a can-do attitude, and a positive and proactive leadership style

Desired Skills:

• Experience with Ranger, Atlas, Tez, Hive LLAP, NiFi

• Experience administering large-scale distributed applications using Hadoop

• Industry certifications in Big Data, Data Science, or Cloud

Shift: 1st shift (United States of America)

Hours Per Week: 40
