What does that mean for you? You’ll join an international organization big enough to take you anywhere, and small enough to get you there sooner. You’ll help change how goods get to market and contribute to global sustainability. You’ll be empowered to bring your authentic self to work and be surrounded by diverse and driven professionals. And you can maximize your work-life balance and flexibility through our
- Global
- Responsible for experimenting with and implementing machine learning frameworks for data science/machine learning development and operations.
- Responsible for learning and operating new data science frameworks and technologies and exploring their viability for current and planned projects.
- Responsible for learning and operating data storage frameworks and technologies and exploring their viability for current and planned projects.
- Responsible for learning and operating data recovery, data backup, and high-availability solutions.
- Responsible for learning, operating, evolving, contributing to, and maintaining Digital bespoke frameworks that support the BRIX platform (such as specific data pipes).
- Responsible for learning, operating, evolving, contributing to, and maintaining Digital bespoke frameworks that support the BDH (Brambles Data Hub) platform.
- Responsible for rigorous testing of framework robustness and scalability.
- Contribute to discussions across Digital data science teams and engineering teams, providing insight as needed on other team members’ current approaches and methods, as well as on tools and data repositories.
- Liaise with Cloud Team (Global IT) to understand company-wide cloud standards and policies and ensure compliance.
- Support Serialization and Asset Digitization programs across the company by setting up and maintaining IoT and Edge platforms.
- Responsible for Continuous Integration and Continuous Deployment (CI/CD) pipelines to support product and software delivery.
- Responsible for contributing to capability building within the Cloud Engineering team, including researching and staying up to date on best practices, e.g. GitOps and IaC (Infrastructure as Code).
- Successful rollout, development, and continuous evolution and operation of cloud-based advanced analytics, data science and machine learning platforms, and IoT, Edge, and Software Delivery frameworks, both for research & development and for ongoing operation.
- Effective support of data science projects, digitization projects and software delivery projects.
- Reliability of systems.
- Adoption of systems.
- Selection and implementation of data science and machine learning frameworks, advanced analytics frameworks, and IoT, Edge, and Software Delivery frameworks.
- Tooling selection and implementation.
- Working autonomously in a highly matrixed organization.
- Global IT, Digital Data Science and Application Development team, Digital Product Teams (PM), Software Engineering Team, Data Engineering Team, DataOps Team.
- Potential vendors (e.g., security).
- BS degree in Data Science, Computer Science, Engineering, Math, Statistics, Physics, or a similar field, or equivalent formal training.
- Proven experience managing data recovery, database high availability, and database tuning.
- Proven experience managing data lakes and data engineering environments.
- Proven experience with FinOps and the ability to optimize spend for CE impact.
- Experience working with IoT and Edge devices interacting with the cloud.
- 3+ years of relevant experience in Cloud/DevOps Engineering or adjacent fields.
- Installed, operated, and managed several data science and machine learning frameworks, or developed your own data science methodologies.
- Experience with Continuous Integration and Continuous Deployment (CI/CD).
- Experience operating, optimizing, querying, and administering databases (such as Postgres, TimescaleDB, etc.).
- Comfortable using and working in a polyglot programming environment (Python, Go, Julia, etc.).
- Experience with Amazon Web Services (S3, EKS, ECR, EMR, etc.).
- Experience with containers and orchestration (e.g. Docker, Kubernetes).
- Experience with Big Data processing technologies (Spark, Hadoop, Flink, etc.).
- Experience with interactive notebooks (e.g. JupyterHub, Databricks).
- Experience with GitOps-style automation.
- Experience with *nix (e.g., Linux, BSD) tooling and scripting.
- Participated in projects based on data science methodologies, physical experiments, or statistical analysis, especially in a data engineering and DevOps capacity.
- Knowledge of major data science and DevOps frameworks and methods.
- Very strong analytical skills and systems thinking.
- Strong programming skills in addition to operational skills are a plus (ideally in one or more of the following languages: Python, Go, Julia, or C/C++).
- Attention to both the big picture and the details.