Devoteam is a leading consulting firm focused on digital strategy, tech platforms and cybersecurity. By combining creativity, tech and data insights, we empower our customers to transform their business and unlock the future. With 25 years’ experience and 10,000 employees across Europe, the Middle East and Africa, Devoteam promotes responsible tech for people and works to create better change.
Join our global community of exceptional talents at Devoteam, where you’ll collaborate with multidisciplinary teams of data specialists, business consultants, solution architects, security experts, cloud engineers, developers, and more across over 20 countries in the EMEA region. As a member of our team, you’ll have the opportunity to work with the world’s leading partners, including AWS, Google Cloud, Microsoft, and ServiceNow, driving innovation and advancing your career.
Here is a non-exhaustive list of your daily missions; we trust you to take them in hand and enrich them in your own way:
Participate in AWS Cloud projects (EC2, S3, Lambda, Redshift, EMR, Kinesis, DynamoDB, etc.) or other solutions hosted on an AWS architecture (Snowflake, Databricks, etc.).
Develop and automate data ingestion pipelines with appropriate processing layers in relevant technologies (Python, Spark, Kafka).
Industrialize data science algorithms.
Design scalable and generic data schemas to meet reporting or other needs (SQL).
Develop custom applications based on existing generic components to meet client needs (scenario scripting, training and monitoring of predictive models and AI, reporting, etc.).
Supervise and mentor junior consultants, e.g. through peer code reviews and the application of best practices.
Assist our sales team with writing proposals and in pre-sales meetings.
Contribute to the development of our internal community.
Participate in the recruitment of our future talents.
The list of technologies you would work with includes:
Python, PySpark, or Spark with Scala; ML libraries such as scikit-learn, MLlib, TensorFlow, Keras, PyTorch, LightGBM, and XGBoost (to name a few).
Data architectures and environments like Hadoop, Elasticsearch, Kafka, etc.
AWS big data stack (Step Functions, Lambda, ECR, S3, EC2, CodeBuild, Glue, EMR, Redshift, Athena), plus automation and DevOps tools.
Implementation of DevOps environments and Infrastructure as Code.
Familiarity with tools like Git, GitLab CI, Jenkins, Ansible, Terraform, Docker, Kubernetes, MLflow, Airflow, or their equivalents in cloud environments.
Do you want to join a team of "builders" working on real data projects in production and at scale? You are in the right place!
You have at least 4 years of experience in data engineering and data platform architecture on AWS.
You have experience in data visualisation (Power BI, Tableau, QlikView, D3.js, etc.).
You have honed your software engineering skills to perfection, working with Git, Scala, Python, and/or SQL.
You build batch and streaming ELT pipelines with your eyes closed.
Your data modelling experience gives you an edge in building analytics use cases.
You have a knack for understanding business needs through use-case analysis, supported by strong communication skills.
Beyond data analytics, you are aware of the ever-growing need to process unstructured data and keep an active eye on emerging technologies.
You cultivate your know-how, constantly seek better ways of doing things, and are always willing to share.
In addition to French or Dutch, you are also fluent in English, both spoken and written.