Software Engineer, Workflows Infrastructure

at Lyft

Location: Palo Alto, CA

Full-time

At Lyft, community is what we are and it’s what we do. It’s what makes us different. To create the best ride for all, we start in our own community by creating an open, inclusive, and diverse organization where all team members are recognized for what they bring.

We care deeply about delivering the best transportation experience; this means the best experience for the passenger and the best experience for the driver. We believe this quality of service can only be achieved with a deep understanding of our world, our cities, our streets… how they evolve, how they breathe. We embrace the powerful positive impact autonomous transportation will bring to our everyday lives, and with our ambition we will become a leader in the development and operation of such vehicles. Thanks to our network, with hundreds of millions of rides every year, we have the means to make autonomy a safe reality. As a member of Level 5, you will have the opportunity to develop and deploy tomorrow’s hardware & software solutions and thereby revolutionize transportation.

The Workflows team in the Autonomous Infrastructure organization is chartered with building a data platform that connects large-scale data processing jobs into reproducible, efficient workflows. Accomplishing this means solving many distributed-systems challenges. Working on the team offers a unique view of how all the pieces of self-driving technology fit together, from perception and motion control to mapping. If you are excited about being a force multiplier on a team of talented engineers working on cutting-edge problems, we should talk!

Responsibilities:

  • Perform requirements discovery with a diverse set of hardware, software, and systems engineers

  • Design and implement reliable, scalable, and high-performing distributed systems and data pipelines

  • Provide observability into the systems’ inner workings so that others can iterate independently

  • Design and operate a compute platform that processes 100+ TB/day

  • Level up, educate, and evangelize data-workflows knowledge across all Autonomous teams

Experience & Skills:

  • Strong knowledge of software engineering and computer science fundamentals. This often comes with a Bachelor’s in CS, but doesn’t have to

  • Extensive programming experience. We use C++, Python, and Java, with a sprinkle of Rust, Go, and Scala

  • Ability to work effectively with a diverse range of talented engineers

  • Demonstrable skills with large-scale analytics data pipelines, from ELK and Grafana to BigQuery and Hive

  • 2+ years of relevant professional experience

  • Experience building REST or gRPC services

  • Understanding of containerization, including Docker and Kubernetes

Nice To Have:

  • Experience with cloud scale data stores (e.g. Dynamo, Spanner, BigQuery), distributed messaging platforms (e.g. SQS, Kafka, Kinesis), or data processing frameworks (e.g. Spark, Flink, Beam)

  • Extensive experience with AWS or GCP

  • Familiarity with infrastructure management tools such as Terraform or Ansible