You will be designing, building and deploying high-quality solutions across Chambers products, platforms, and applications, ensuring they meet Data Engineering and QA standards.
You will also promote engineering best practices, act as the point of expertise for all data-related projects, and ensure standards and performance are met across the Data Engineering team.
As a core member of our Data Engineering team, you will help in the implementation of our data strategy and transformation roadmap.
- Mentor a team of developers to effectively create, optimise and maintain automated systems and processes across a given project or technical domain.
- Coordinate engineering resources, ensuring adherence to standards and effective delivery.
- Contribute to the continuous improvement of the team.
- Contribute to the team’s ability to make and deliver on its commitments.
- Innovate and experiment with technology to deliver real business benefits.
- Regularly launch products and services based on your work and be an integral part of making these a success.
- Guide, influence and challenge the technology team and stakeholders to understand the benefits, pros and cons of various technical options.
- Assist with the recruitment of the engineering team.
- Guide, instruct, motivate and manage assigned staff on projects.
- Promote an innovative thinking process and encourage it in others.
- Plan and undertake technical work on projects.
- Lead the development of technical specifications and architecture around Data Lake.
- Write clean, testable code in SQL and Python.
- Review and refactor code.
- Deploy code in a trackable and safe manner.
- Document development and operational procedures.
- Work within the agile framework at Chambers.
- Demonstrable professional data engineering experience
- Excellent understanding of Databricks and PySpark
- Excellent understanding of SQL and Azure Cosmos DB databases
- Excellent knowledge of designing, constructing, administering, and maintaining data warehouses and data lakes.
- Knowledge of Azure Cloud Services
- Good understanding of T-SQL programming
- Good exposure to Azure data lake technologies such as Azure Data Factory (ADF), HDFS and Synapse
- Good knowledge of Data Governance, Data Catalog, Master Data Management
- Knowledge of advanced analytics and model management, including Azure Databricks and Azure ML/MLflow, as well as deployment of models using Azure Kubernetes Service
- Excellent oral and written communication skills
- Highly driven, positive attitude, team player, self-motivated and very flexible
- Strong analytical skills, attention to detail and excellent problem-solving/troubleshooting abilities
- Good time management skills
- Knowledge of agile methodology
- Knowledge of GitHub
- Prioritisation skills to handle a fast-paced, dynamic environment
- A passionate data engineer with a history of driving their own technical and professional development.
- Experience in the media, publishing, research, or a similar consumer-focused industry. (Highly Desirable)
- Able to clearly communicate with business and technology stakeholders.
- Attention to Detail, focused on the finer details that make the difference.
- Delivery Focus, pragmatic and driven to get solutions live.
- Able to Lead, providing thought leadership in the data domain.
- A Proactive attitude. A self-starter who seeks out opportunities for themselves and their team.
- Awareness of industry and consumer trends
- Awareness of and the ability to manage business and technology expectations.
- Able to build strong personal relationships and trust.
- Able to sell ideas and visions; influence and advise stakeholders at all levels.