  • ID
    #12228944
  • Job type
    Permanent
  • Salary
    $100,000 - $160,000 per year
  • Source
    Jobot
  • Date
    2021-04-13
  • Deadline
    2021-06-12
 

Vacancy expired!

AWS, Apache, SaaS

This Jobot Job is hosted by: Derek Cox

Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.

Salary: $100,000 - $160,000 per year

A bit about us:

We are a Cloud Security SaaS startup looking for ambitious engineers to join a small team. We are building the first cloud-native software security and compliance platform built on a graph data model, and we recently closed a $20 million Series A round. Our product is cloud native, and customers such as Reddit, Databricks, and Auth0 use it to manage and monitor their cloud infrastructure. Are you passionate about building pipelines for storing and querying data efficiently?

You will be responsible for determining the tools, technologies, and approach to expanding the capabilities of our data platform. You will guide architectural decisions on creating a cost-effective, efficient, and resilient data ingestion pipeline, and you will write service and infrastructure code to support the production system. Experienced data engineers will also be expected to educate others on best practices for using cloud-native tools across a variety of use cases.

Why join us?
  • Competitive Base Salary
  • 100% company paid health plan for employees
  • Equity in high-growth start-up (not in lieu of a salary)
  • Flexible Hours
  • Very generous PTO
  • Dental and Vision, FSA, HSA
  • Small team, autonomy
  • Many more great perks!

Job Details

Key Experience:
  • Big data tools such as Hadoop, Athena, Apache Hive, and Apache Spark
  • Writing production code that is reliable and easy to support
  • Guiding other engineers on data engineering best practices
  • Understanding the importance of maintaining customer data privacy and security compliance
  • Making practical decisions regarding data storage and retrieval
  • Helping architect a data ingestion and query pipeline that enables efficient, cost-effective data science use cases
Desired Experience:
  • Working with graph databases
  • Working with microservices
  • DevOps automation and Infrastructure as Code with tools like Terraform or CloudFormation
  • Willingness to explore new approaches to solving problems and to challenge the status quo
  • Eagerness to improve processes via automation
Technologies:
  • JVM languages (Java, Scala, etc.)
  • Python
  • SQL
  • Node.js
  • TypeScript
  • Apache Spark, Hive, Hadoop
  • Serverless (AWS Lambda and AWS Fargate)
  • Docker
  • AWS (Athena, Neptune, Lambda, API Gateway, DynamoDB, Kinesis, SQS, S3, Comprehend, etc.)
  • GraphQL
  • Elasticsearch
  • Redis
  • Terraform

Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.

