ID: #48233336
Job type: Contract
Salary: TBD
Source: VDart, Inc.
Date: 2022-12-31
Deadline: 2023-03-01
Sr. Data Engineer (Databricks/AWS/Python)
Plano, TX 75094, USA (Contract)
VDart Inc is a leading global provider of digital solutions, products, and talent management, delivering digital technology solutions in the Automotive, Manufacturing, Energy & Utilities, and Healthcare industries. We are led by a strong global team located across 10 countries, including the USA, Canada, Mexico, Brazil, the UK, Japan, Australia, and India. We are currently accepting applications from staffing firms for our Preferred Partner Program, where we align niche staffing firms with specific verticals based on their strengths. To apply, please visit:

Role: Sr. Data Engineer (Databricks/AWS/Python)
Location: Plano, TX
Duration: Long Term Contract

Roles & Responsibilities:
- Perform analysis, design, development, and configuration functions as well as define technical requirements for assignments of intermediate complexity.
- Participate with a team to perform analysis, assessment and resolution for defects and incidents of intermediate complexity and escalate appropriately.
- Work within guidelines set by the team to independently tackle well-scoped problems.
- Seek opportunities to expand technical knowledge and capabilities.
- Stand up data platforms, build out ETL pipelines, write custom code, interface with data stores, perform data ingestion, and build data models
- Oversee data ingestion into enterprise data mining solutions
- Ability to take ownership when necessary, acting with urgency, putting customers first, and looking into the future
- Solid understanding of cloud technologies, enterprise level Data Strategy and Data Governance concepts
- Familiarity with data visualization tools and methodologies is a plus
- Develop data pipelines in AWS using a wide variety of data sets, along with Redshift
- Strong familiarity and hands-on experience with Databricks, Data Factory, StreamSets
- Experience in writing code in Python/Scala.
- 10+ years of experience in the architecture, design, and implementation of analytics solutions
- Hands-on experience designing and developing applications using Databricks
- Experience designing solutions on AWS
- Data Migration experience from other platforms to AWS
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- Programming experience with SQL, stored procedures, and Spark/Scala
- Strong understanding of data modeling and defining conceptual, logical, and physical data models
- Ability to design and present system architecture across different environments
- Ability to work with both on-premises and cloud systems
- Data Engineering experience
- Experience working with ETL tools such as Databricks and Redshift
- Experience coding in Python, and PySpark
- Experience with Python SDLC tools (flake8, commitizen, CircleCI)
- Comfortable working with APIs
- Cloud experience, specifically working with AWS (ECS, Redshift)
- Experience working with relational databases and SQL scripts.