ID: #20176913
Job type: Permanent
Salary: Depends on Experience
Source: Keller Schroeder
Date: 2021-09-23
Deadline: 2021-11-21
Data Engineer - Remote
Indianapolis, Indiana 46201, USA (Permanent)
Vacancy expired!
Keller Schroeder, an information technology firm in Evansville, IN, is seeking a Data Engineer to work for our direct client. This is a direct-hire, full-time opportunity. Candidates must live within 2 – 3 hours' driving distance of Washington, IN, to work onsite as needed; otherwise, the role is remote.
No sponsorship is available.
A Data Engineer should be someone who:
- Wants to understand what the data is (its metadata definition) and how it may be valuable in supporting business decision-making, application integration, and analytics.
- Can visualize the data lifecycle in their head; knows that data has a source, that it may have been aggregated, manipulated, or modified throughout its lifecycle, and that business rules govern why it was captured in the first place.
- Understands the business mission and can design solutions to source data, curate it, and deliver it safely and securely, from Point A to Point B, to meet the needs of the business.
- Performs the tasks of data engineering: sourcing data, extracting it, enriching metadata definitions, developing data rules, transforming data, loading it to target locations, and visualizing it graphically.
- Design, build, and support scalable, highly available data pipelines using Azure Data Factory and Databricks.
- Implement processes that improve and lead to greater data quality.
- Work with the line of business to understand requests pertaining to data.
- Utilize SQL/ETL processes to get data from various sources.
- Ensure data is clean and structured so it can be consumed and visualized.
- Write SQL queries to transform raw data into usable business information and metrics.
- Collaborate with different teams to understand business requirements and to design workable solutions.
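The extract/transform/load duties above can be sketched in a few lines. This is a minimal, hypothetical illustration using Python's built-in SQLite in place of the SQL Server and Azure Data Factory stack named in the posting; all table and column names (raw_sales, sales_summary) are invented for the example.

```python
import sqlite3

# Extract: land raw source rows in a staging table.
# (In-memory SQLite stands in for a real source system.)
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_sales (region TEXT, amount REAL);
    INSERT INTO raw_sales VALUES ('east', 100.0), ('east', 50.0), ('west', 75.0);
    CREATE TABLE sales_summary (region TEXT PRIMARY KEY, total REAL);
""")

# Transform + load: aggregate raw rows into a business metric
# (total sales per region) and write it to the curated target table.
conn.execute("""
    INSERT INTO sales_summary (region, total)
    SELECT region, SUM(amount) FROM raw_sales GROUP BY region
""")
conn.commit()

rows = dict(conn.execute("SELECT region, total FROM sales_summary"))
print(rows)
```

In a production pipeline the same transform-in-SQL pattern would typically run as a stored procedure or a Data Factory activity rather than inline strings.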
- 3+ years’ experience in a Data Engineer role.
- Azure Data Factory or similar data integration, orchestration, and automation tools.
- SQL Server, stored procedures, and writing SQL queries.
- Expert in relational database design and concepts.
- Strong familiarity and hands-on experience with SQL and statistical software packages (Python, R, SAS).
- Expert in one or more programming languages: C/C++, Python, Perl, or Java.
- Data visualization tools, such as Power BI, Tableau, Spotfire, RStudio Shiny, etc.
- Knowledge of tools and techniques for developing machine learning models.
- Up to date on the latest industry trends, able to articulate trends and their potential clearly and confidently.
- Experience with genetic algorithms, logistic and linear regression, PCA, decision tree analysis and statistical methods is a plus.
- Bachelor’s degree in computer science, engineering, mathematics, or related field.
- Master’s degree in computer science, mathematics, or related field preferred.
- Goal-oriented, organized team player.
- Comfortable in both leadership and individual contributor roles.
- Encouraging to team and staff; able to mentor and lead.
- Creative problem solver who thrives when presented with a challenge.
- Able to analyze problems and strategize for better solutions.
- Experience with metadata, identifying data sources and lineage, developing data pipelines, and data mapping.
- Excellent verbal and written communication skills.
- Able to multitask, prioritize, and manage time effectively.
- Able to work in a fast-paced environment.