ID: #46049001
Job type: Permanent
Salary: USD 65 per hour
Source: Matlen Silver
Date: 2022-09-27
Deadline: 2022-11-25
L3 Production Support/Release Coordinator
Charlotte, North Carolina 28201, USA (Permanent)
Vacancy expired!
- Typically, 2-5 years of experience
- Provide on-call (rotational) support to handle production failure scenarios (Incident Management)
- Monitor and ensure adherence to daily/weekly/monthly Service Level Agreements (SLA) for data sourcing, processing and provisioning timings
- Serve as the first line of support to investigate and resolve data quality issues, escalating to the L2/L3 Support Team when required
- Follow communication and escalation routines to prevent and handle SLA breaches: contact upstream data source applications about data delays and data quality issues, and create and send user bulletins for data provisioning delays/issues
- Provide capacity management support (e.g., ensure enough space allocation on database/server directories)
- Provide application availability support (e.g., bringing the application back to Business-as-Usual (BAU) by running jobs after a platform outage)
- Manage incidents, service requests, problems, and changes using ITSM tools and methodologies
- Work closely with Technology Infrastructure and Development Teams in supporting integrated/independent release deployments, software/hardware upgrades, and server upgrades, and participate in Disaster Recovery and Resiliency testing exercises
- Communicate effectively with L2/L3 Support, Development, and internal business operations teams, serving as an escalation point between the client/business area and internal management for the resolution of moderately complex unresolved problems, complaints, and service requests
- Collaborate with offshore team members for efficient production support shift handoffs
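The SLA-monitoring duty above amounts to checking whether each upstream data feed lands before its daily cutoff. A minimal sketch in Python (feed names and deadlines are hypothetical, not from the posting):

```python
from datetime import datetime, time

# Hypothetical daily SLA cutoffs for upstream data feeds (local time).
SLA_DEADLINES = {
    "customer_feed": time(6, 0),   # must land by 06:00
    "trades_feed": time(7, 30),    # must land by 07:30
}

def check_sla(feed: str, arrived_at: datetime) -> str:
    """Return 'OK' if the feed arrived by its daily SLA cutoff,
    otherwise 'BREACH' (which would trigger the escalation routine)."""
    deadline = SLA_DEADLINES[feed]
    return "OK" if arrived_at.time() <= deadline else "BREACH"
```

In practice such a check would run on a scheduler and feed the user-bulletin and escalation steps described above.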
- Primary skill: Teradata
- Secondary skills: Hadoop, Python
- Computer Science/Software Engineering (or related) degree
- 2-5 years' experience with application production support on Teradata/Hadoop data-warehouse and analytical platforms
- Experience monitoring, analyzing, and fixing ETL job failures for SQL/Hadoop-based ETL and analytic workflows using native utilities (Teradata BTEQ, TPT, FastExport), Hive, Spark/PySpark, etc.
- Experience analyzing data incidents and issues, and resolving simple-to-moderate data quality issues
- Experience working with Hadoop ecosystem technologies and toolsets such as Hive, Sqoop, Impala, and Kafka, and with Python/Spark/PySpark workloads
- Very good knowledge of Unix/Linux shell scripting and job scheduling tools (e.g., AutoSys)
- Excellent written and proactive communication skills, as well as diagramming skills
- Strong analytical and problem solving abilities
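As a flavor of the ETL-failure triage listed above, a first pass often scans scheduler or job logs for jobs whose latest status is a failure. A minimal sketch (log format and job names are hypothetical, not from the posting):

```python
import re

# Hypothetical scheduler log lines: "<timestamp> <job_name> <STATUS>"
SAMPLE_LOG = """\
2022-09-27T05:01:12 load_customer_dim SUCCESS
2022-09-27T05:14:03 load_trades_fact FAILURE
2022-09-27T05:20:44 refresh_hive_stats SUCCESS
2022-09-27T05:32:09 export_risk_extract FAILURE
"""

def failed_jobs(log_text: str) -> list[str]:
    """Return names of jobs whose most recent status line is FAILURE."""
    status = {}
    for line in log_text.splitlines():
        m = re.match(r"\S+\s+(\S+)\s+(SUCCESS|FAILURE)$", line)
        if m:
            # Later lines overwrite earlier ones, so the dict holds
            # each job's most recent status.
            status[m.group(1)] = m.group(2)
    return [job for job, st in status.items() if st == "FAILURE"]
```

The failed job names would then drive the incident-management and escalation routines described in the responsibilities.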