Founded in 2008, our Customer represents a global travel community that offers magical end-to-end trips, including where you stay, what you do, and the people you meet. They exist to create a world where anyone can belong anywhere, providing healthy travel that is local, authentic, diverse, inclusive, and sustainable. Their website and app uniquely leverage technology to economically empower millions of people around the world to unlock and monetize their spaces, passions, and talents to become hospitality entrepreneurs.
We are seeking a Data Engineer on a contract basis to support their business needs. This role is 100% remote.
What You’ll Do
- Extract, cleanse, conform, and integrate data from external web Application Programming Interfaces (APIs) into our Presto data warehouse
- Dimensionally model datasets and orchestrate workflows and automated data quality checks
- Generate ad hoc reports, recurring datasets, and data visualizations for end users using system tools and database or data warehouse queries and scripts
- Integrate data from multiple sources to produce requested or required data elements
- Program and maintain report forms and formats, information dashboards, data generators, canned reports, and other end-user information portals or resources
- Create specifications for reports based on business requests
- Support both internal operations and external clients, working in conjunction with Professional Services and outsourcing functions
What You’ll Bring
- 3-6 years of experience building scalable Spark data pipelines (preferably using Scala), leveraging Airflow orchestration
- 4-6 years of experience extracting data from web APIs
- 3-5 years of experience with high-level programming languages such as Python
- Proficiency in Spark/MapReduce development
- 3-6 years of experience with relational databases and query authoring in Structured Query Language (SQL), particularly Hive and Presto
- 1-5 years of experience with large-scale data warehousing architecture and data modeling is preferred
- 2-3 years of experience with Tableau data visualization is preferred
- Expertise with data processing and Extract, Transform, and Load (ETL) technologies is required
- Experience building high-quality end-to-end data pipelines in an agile environment from requirements to production
- Experience with big data technologies such as Hadoop, Spark, and Hive is preferred
- Experience working with Git and Jira (or other source control and task management tools) is preferred
- Good communication skills that allow smooth collaboration with stakeholders are preferred
- Bachelor's degree in Computer Science or Computer Engineering is required
- A certification or license demonstrating Python proficiency in Spark/MapReduce development is a plus
Hours & Location:
- M-F, 40 hours/week. This role will be 100% remote.
Perks are available through our 3rd Party Employer of Record. Health Benefits: Medical, Dental, Vision, 401k, FSA, Commuter Benefit Program (available upon completion of the waiting period for eligible engagements).
In order to create a safe, productive work environment, our client is requiring all contractors who plan to be onsite to be fully vaccinated according to CDC guidelines. Prior to coming into our offices, contractors will be required to attest that they are fully vaccinated.
The salary range for this position is $83.62 - $98.62 per hour.