  • ID
    #10680835
  • Job type
    Permanent
  • Salary
    Depends on Experience
  • Source
    Sparksoft
  • Date
    2021-03-06
  • Deadline
    2021-05-05

Big Data Architect

Columbia, MD 21044, USA
 
Permanent

Vacancy expired!

Job Title: Big Data Architect
Location: Columbia, MD
Duration: Long term with Sparksoft

Job Summary: Sparksoft is looking for a Big Data Architect who can help achieve the customer's business goals in data analytics by utilizing data, cloud services, hardware, software, and other IT components. The architect will assist in identifying existing gaps, building new data pipelines, and providing solutions that deliver advanced analytical capabilities and enriched data to business users.

Responsibilities:
• Provide top-quality big data solutions to modernize the data platform in line with customer expectations
• Provide planning and guidance in integrating, centralizing, and maintaining data to ensure relevance to stakeholders, in alignment with the architecture strategy and roadmap
• Create, maintain, and manage the data architecture roadmap, data patterns, data flows, storage, data principles, data governance, and data security
• Engage with customers to understand strategic requirements and translate business requirements into data solutions
• Collaborate with domain experts and developers to deliver data products
• Maintain and manage data models and data artifacts, including data dictionaries and the metadata registry
• Understand the bigger picture of data architecture and data flow to identify gaps, and apply design thinking to harness existing data in anticipation of future use cases
• Analyze and recommend the optimal approach for obtaining data from diverse source systems
• Utilize Big Data technologies to design, develop, and evolve scalable, fault-tolerant distributed components; provide strategies to transition legacy Java and Hive ETLs to Spark ELTs
• Stay current on the latest technology to ensure maximum ROI for customers

Required Skills:
• Expertise in the Hadoop ecosystem and its architecture components, and in Big Data tools and technologies including Hadoop, Spark, Hive, HBase, Sqoop, Kafka, YARN, Oozie, MapReduce, Tez, and Presto
• Experience in languages such as Java, Scala, Python, or shell scripting; practical knowledge of the end-to-end design and build process of data pipelines
• Expertise with SQL and data modeling in an Agile development process
• Ability to work with large data sets of highly diverse data in multiple types, formats, and sources
• Good experience with AWS cloud managed services for data processing and analytics
• Ability to work in a team-oriented environment with people of diverse skill sets
• Skills to analyze complex problems using the information provided, understand customer requests, and provide the appropriate solution
• Excellent communication skills to engage with customers and stakeholders, understand their objectives, and assist in delivering data products
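The skills above center on the end-to-end design and build of data pipelines. As a purely illustrative sketch (plain Python standard library only, not the Spark/Hive stack the role actually calls for; all names here are hypothetical), an extract/transform/load flow can be broken into three stages:

```python
# Hypothetical, minimal ETL sketch in plain Python -- illustrative of the
# extract/transform/load stages described in the posting; a production
# pipeline would use Spark, Hive, or similar distributed tooling instead.
from collections import defaultdict

def extract():
    # Stand-in for reading from a source system (e.g., a Hive table or Kafka topic).
    return [
        {"user": "a", "event": "click", "value": 3},
        {"user": "b", "event": "view", "value": 1},
        {"user": "a", "event": "click", "value": 2},
    ]

def transform(records):
    # Keep only click events and tag each record with a derived field.
    return [
        {**r, "weighted": r["value"] * 2}
        for r in records
        if r["event"] == "click"
    ]

def load(records):
    # Aggregate per user, as a warehouse fact table might.
    totals = defaultdict(int)
    for r in records:
        totals[r["user"]] += r["weighted"]
    return dict(totals)

if __name__ == "__main__":
    print(load(transform(extract())))  # {'a': 10}
```

In Spark the same shape appears as read, `filter`/`withColumn`, and `groupBy().agg()` over DataFrames; the staged structure is what makes the pipeline testable and reusable.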


Desired Skills:
• AWS cloud architect certification
• CMS experience
• Experience with Databricks

Education/Experience Level:
• Bachelor's degree with 7 years' experience, or 10+ years of experience, in the data processing and analytics field
• 5+ years of experience architecting data lake and warehouse solutions with Spark, Hadoop, and HQL
• 5+ years of experience building reusable frameworks for event-driven data processing
• 4+ years of experience building data products with AWS cloud managed services such as EMR, Redshift, Redshift Spectrum, Athena, and Lambda

Sparksoft is a certified Capability Maturity Model Integration (CMMI) SVC and DEV Level 3, ISO 9001:2015, ISO 27001:2013, HUBZone, 8(a), Small Disadvantaged Business (SDB), Women-Owned Small Business (WOSB), Small, Women-owned, Minority-owned (SWaM), and MBE/DBE/SBE consulting firm. With our focused mission "to ignite innovation, inspire transformation, and implement digital solutions for a healthier nation," we specialize in six digital health services: Test Automation, Cloud Services, DevOps Delivery, Cyber Security, Data Science, and Human-Centered Design. Since 2004, our exceptionally skilled people, proven leadership, and optimized processes have worked together relentlessly to deliver ever more efficient solutions.

Sparksoft is an Affirmative Action/Equal Opportunity Employer and does not discriminate against any applicant or employee because of race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, protected veteran status, or any other characteristic protected under Federal, State, or local law.
