The Role
We are looking for a well-rounded, entrepreneurial Data Engineer. You will serve as an extension of the founding team: make foundational technology decisions, dive deep into every aspect of the tech stack, solve challenging problems in automation, scaling, and optimization, and contribute to critical decisions about company direction.
In this role, you will be responsible for the data transfer, processing, and storage infrastructure of the Amplify platform. This role requires you to be versatile and able to build a working knowledge of many technologies, databases, and libraries in order to contribute to quick, sound decisions about the best approach to any problem.
Initial Responsibilities
As an early employee of a well-funded and rapidly growing startup, you will have significant opportunities for growth and ownership. Your early responsibilities will include:
- Write code to manipulate and transfer large amounts of data across different cloud data stores and warehouses in an automated and performant manner (see the sketch after this list)
- Ensure compatibility with multiple cloud data technologies like Snowflake, Redshift, BigQuery, and Databricks
- Enable quality and performance monitoring for all data connections
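To make the first responsibility concrete, here is a minimal sketch of the kind of warehouse-to-warehouse transfer involved, assuming the snowflake-connector-python (with its pandas extra) and google-cloud-bigquery client libraries. The connection parameters, query, and table names are hypothetical placeholders, not a description of the Amplify stack.

```python
# Sketch: copy the result of a Snowflake query into a BigQuery table.
# All credentials and identifiers below are placeholders.
import snowflake.connector
from google.cloud import bigquery


def transfer_table(snowflake_params: dict, source_query: str, dest_table: str) -> int:
    """Run a query against Snowflake and load the result into BigQuery."""
    # Pull the source data into a pandas DataFrame (requires pyarrow).
    conn = snowflake.connector.connect(**snowflake_params)
    try:
        cur = conn.cursor()
        cur.execute(source_query)
        df = cur.fetch_pandas_all()
    finally:
        conn.close()

    # Load the DataFrame into BigQuery and block until the job completes.
    client = bigquery.Client()
    job = client.load_table_from_dataframe(df, dest_table)
    job.result()
    return len(df)


# Example usage (placeholder identifiers):
# rows = transfer_table(
#     {"user": "...", "password": "...", "account": "..."},
#     "SELECT * FROM analytics.events",
#     "my_dataset.events",
# )
```

A production pipeline would stream or chunk the data rather than materialize it all in memory, and would add the retries and quality/performance monitoring described above; this sketch only shows the shape of the work.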
Ideal Profile
- At least 5 years on high-performing software engineering or data teams (startup experience is a plus)
- Experience with Python and SQL (ideally in the context of a framework like Django)
- Previously managed and integrated with one or more analytical cloud warehouses (Snowflake, Redshift, BigQuery, or Databricks)
- Experience writing production-grade data transfer and transformation pipelines (ETL/ELT), especially pipelines that integrate external data sources
- Experience with distributed computing / data processing technologies like Dask or Spark (see the sketch after this list)
- Passionate about building products from the ground up
- Opinionated about the best way to do things and able to explain the tradeoffs between design choices
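For a flavor of the distributed-processing item above, here is a toy Dask example of the kind of partitioned transform this role would involve. The file paths and column names are invented for illustration only.

```python
# Sketch: a partitioned filter/derive/aggregate pipeline with Dask.
import dask.dataframe as dd

# Read a directory of CSVs as one partitioned DataFrame; partitions are
# processed in parallel by the Dask scheduler.
events = dd.read_csv("events/*.csv")

# A typical ELT-style transform: filter rows, derive a column, aggregate.
daily_totals = (
    events[events["status"] == "ok"]
    .assign(amount_usd=lambda df: df["amount_cents"] / 100)
    .groupby("day")["amount_usd"]
    .sum()
)

# Dask is lazy: nothing executes until .compute() runs the task graph.
print(daily_totals.compute())
```

The same pipeline maps naturally onto Spark; the point is comfort with lazy, partition-parallel computation rather than any one framework.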