ID: #20235164
Job type: Permanent
Salary: Depends on experience
Source: Integral Ad Science, Inc.
Date: 2021-09-24
Deadline: 2021-11-22
Senior Big Data Engineer (CTV)
Chicago, Illinois 60290, USA (Permanent)
Vacancy expired!
Note: This role can sit 100% remote, or in our Chicago or NYC office locations.

Integral Ad Science (IAS) is a global technology and data company that builds verification, optimization, and analytics solutions for the advertising industry, and we're looking for a Senior Data Engineer to join our Data Engineering team. If you are excited by technology that has the power to handle hundreds of thousands of transactions per second, collect tens of billions of events each day, and evaluate thousands of data points in real time, all while responding in just a few milliseconds, then IAS is the place for you!

As a Senior Big Data Engineer you will design, implement, and maintain big data pipelines responsible for aggregating tens of billions of daily transactions. You will be part of a growing team developing and maturing IAS' capability in video ad verification, analytics, and anti-ad-fraud software products. The ideal candidate is naturally curious, dedicated, and detail-oriented, with a strong desire to work with awesome people in a highly collaborative environment. You should also be able to not take yourself too seriously.

What you'll do:
- Architect, design, code, and maintain components for aggregating tens of billions of daily transactions
- Work on Big Data technologies such as Hadoop, MapReduce, Kafka, and/or Spark in columnar databases on AWS
- Contribute to the entire software lifecycle including hands-on development, code reviews, testing, deployment, and documentation for streaming & batch ETL and RESTful APIs
- Provide leadership, work collaboratively, and be a mentor in an awesome team
What you'll need:
- Bachelors or Masters in Computer Engineering, Computer Science, Electrical Engineering or related field
- 5+ years of recent hands-on experience in an object-oriented language (Java, Scala, Python)
- 5+ years of experience designing and building data pipelines and data-intensive applications
- Experience using Big Data frameworks (e.g., Hadoop, Spark, MapReduce) and MPP databases (e.g., RedShift, Snowflake) for complex data assembly and transformation
- Strong knowledge of collections, multi-threading, JVM memory model, etc.
- In-depth understanding of algorithms, performance, scalability, and reliability in a Big Data setting
- Good knowledge of SQL
- Experience in full software development, Agile, and continuous integration / deployment
- Excellent interpersonal and communication skills
- Experience building production level systems in a cloud environment (AWS, Azure or Google Cloud Platform)
- Experience building event-driven microservices applications using Spring Boot or gRPC
- Orchestrating data pipelines using tools such as Airflow
- Familiarity with messaging frameworks like Kafka or RabbitMQ
- Experience with Spark streaming or Flink
- Solid understanding of OLTP and OLAP systems and database fundamentals