Design and Develop Data Pipelines: Create and maintain efficient, reliable, and scalable data pipelines that extract, transform, and load (ETL) data from diverse sources into AWS data storage systems.
Data Modeling and Architecture: Design and implement data models for data warehousing and data lakes on AWS, ensuring data integrity, performance, and scalability.
AWS Cloud Infrastructure: Utilize various AWS services such as Amazon S3, Amazon Redshift, AWS Glue, Amazon EMR, Amazon RDS, and others to build data solutions.
Data Transformation and Processing: Develop data transformation processes, including data cleansing, enrichment, and aggregation, to ensure data accuracy and consistency.
Performance Optimization: Identify and implement performance optimization techniques to enhance data processing speed and reduce latency in data pipelines.
Data Security and Compliance: Ensure that data handling practices comply with relevant data security and privacy regulations. Implement security measures to protect sensitive data.
Monitoring and Troubleshooting: Monitor data pipelines, data jobs, and data storage systems for issues and troubleshoot any data-related problems to ensure smooth data flow.
Continuous Improvement: Stay updated with the latest AWS services and data engineering best practices to propose and implement improvements to the existing data infrastructure.
Qualifications & Experience
Bachelor's degree or higher in Computer Science, Engineering, or a related field.
AWS Certified Data Analytics – Specialty or similar certifications will be an added advantage.
At least 1-2 years of proven experience as a Data Engineer, specifically working with AWS data services and related technologies.
Strong knowledge of AWS services such as Amazon S3, Amazon Redshift, AWS Glue, Amazon EMR, Amazon RDS, and others.
Proficiency in programming languages such as Python and SQL, and familiarity with data manipulation frameworks/libraries.
Hands-on experience with data modeling, data warehousing concepts, and building data pipelines using ETL tools.
Knowledge of data streaming platforms and technologies such as Apache Kafka or Amazon Kinesis.
Familiarity with data governance, data security, and data privacy best practices.
Strong communication skills to collaborate effectively with cross-functional teams and convey technical concepts to non-technical stakeholders.
[Apply now at https://my.hiredly.com/jobs/jobs-malaysia-zus-coffee-job-data-engineer-1]