Build new applications and enhance existing ones in preparation for the launch of a new business.
Align with the business teams and the rest of the AMFMCT teams to assess business needs and transform them into scalable applications.
Build and maintain code to manage data received from heterogeneous sources, including web-based sources, internal/external databases, and flat files in varied formats (binary, ASCII).
Help build the new enterprise data warehouse and maintain the existing one.
Design and support effective storage and retrieval of very large internal and external data sets, and think ahead about the convergence strategy with our AWS cloud implementation.
Assess the impact of scaling up and scaling out and ensure sustained data management and data delivery performance.
Build interfaces that support new and evolving applications and accommodate new data sources and data types.
Your Required Skills
10+ years of experience building data pipelines in Java/Scala
5+ years of experience working in the AWS cloud, especially with services such as S3, EMR, Lambda, AWS Glue, and Step Functions
5+ years of experience with Spark
Experience working in an Agile environment with a Scrum Master/Product Owner, and the ability to deliver
Strong experience with data lakes, data marts, and data warehouses
Ability to communicate status and challenges and align with the team
Demonstrated ability to learn new skills and work as part of a team
Your Desired Skills
Experience working with Hadoop or other big data platforms
Exposure to deploying code through an automated pipeline
Good exposure to container technologies such as Docker or ECS
Direct experience supporting multiple business units with foundational data work, and a sound understanding of capital markets within Fixed Income
Knowledge of Jira, Confluence, the SAFe development methodology, and DevOps
Excellent analytical and problem-solving skills, with the ability to think quickly and offer alternatives both independently and within teams.
Proven ability to work quickly in a dynamic environment.
Bachelor's degree in Computer Science or a related field.
Java is a must, and UI experience is also required.
5-8 years of software development experience; minimum 2 years of experience on a big data platform; proficiency with Java, Python, Scala, HBase, Hive, MapReduce, ETL, Kafka, MongoDB, Postgres, and visualization technologies.
Flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle
Understanding of automated QA needs related to big data
Understanding of various visualization platforms (Tableau, D3.js, others)
Proficiency with Agile or Lean development practices