Job Reference #
City
Job Type
Your role
We’re looking for a Big Data Lead Engineer to:
- engineer reliable data pipelines for sourcing, processing, distributing, and storing data in different ways, using cloud (Azure) data platform infrastructure effectively.
- transform data into valuable insights that inform business decisions, making use of our internal data platforms and applying appropriate analytical techniques.
- develop, train, and apply Data Engineering techniques to automate manual processes, and solve challenging business problems.
- ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
- build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues.
- understand, represent, and advocate for client needs.
- codify best practices and methodology, and share knowledge with other engineers at UBS
- shape the Data and Distribution architecture and technology stack within our new cloud-based data lakehouse
- be a hands-on contributor and senior lead in the Big Data and data lake space, able to collaborate on and influence architectural and design principles across batch and real-time flows
- bring a continuous improvement mindset, always on the lookout for ways to automate and reduce time to market for deliveries
Your team
Your expertise
- Experience building data processing pipelines on Azure using various ETL/ELT design patterns and methodologies, with solutions built on ADLS Gen2, Azure Data Factory, Databricks, Python, and PySpark
- Experience with at least one of the following languages: Scala, Java, or Python
- Deep understanding of the software development craft, with a focus on cloud-based (Azure), event-driven solutions and architectures, in particular Apache Spark batch and streaming and data lakehouses using the medallion architecture. Knowledge of Data Mesh principles is a plus
- Ability to debug Spark jobs using tools such as the Ganglia UI, and expertise in optimizing them
- The ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate datasets.
- Expertise in creating data structures optimized for storage and varied query patterns, e.g. Parquet and Delta Lake
- Experience with traditional data warehousing concepts (Kimball methodology, star schemas, slowly changing dimensions) and ETL tools (Azure Data Factory, Informatica)
- Experience in data modelling with at least one database technology, such as:
- Traditional RDBMS (MS SQL Server, Oracle, PostgreSQL)
- NoSQL (MongoDB, Cassandra, Neo4J, CosmosDB, Gremlin)
- Understanding of Information Security principles to ensure compliant handling and management of data
- Ability to clearly communicate complex solutions.
- Strong problem solving and analytical skills.
- Working experience with Agile methodologies (Scrum)
- A proven team player with strong leadership skills, who can work in a collaborative way across business units, teams and regions
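To illustrate one of the warehousing concepts listed above, here is a minimal, hypothetical sketch of a Type 2 slowly changing dimension update in plain Python. In the role described, this logic would typically run in PySpark/Databricks against Delta tables (e.g. via a MERGE); the function and field names here are illustrative assumptions, not part of the role description:

```python
from datetime import date

def scd2_upsert(dimension, incoming, today):
    """Apply a Type 2 slowly-changing-dimension update (illustrative sketch).

    dimension: list of dicts with keys key, attrs, valid_from, valid_to, is_current
    incoming:  dict mapping key -> latest attrs snapshot
    Changed rows are closed out and a new current version is appended;
    unchanged rows pass through; brand-new keys become new current rows.
    """
    result = []
    seen = set()
    for row in dimension:
        key = row["key"]
        if row["is_current"] and key in incoming:
            seen.add(key)
            if row["attrs"] != incoming[key]:
                # Close the old version and open a new current one.
                result.append(dict(row, valid_to=today, is_current=False))
                result.append({"key": key, "attrs": incoming[key],
                               "valid_from": today, "valid_to": None,
                               "is_current": True})
                continue
        result.append(row)
    for key, attrs in incoming.items():
        if key not in seen and not any(r["key"] == key for r in dimension):
            result.append({"key": key, "attrs": attrs,
                           "valid_from": today, "valid_to": None,
                           "is_current": True})
    return result
```

The same pattern maps onto a Delta Lake MERGE in production: match on the business key, expire the current row when attributes differ, and insert the new version, preserving full history for point-in-time queries.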
About us
We have a presence in all major financial centers in more than 50 countries.
How we hire
Join us
From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact?
Contact Details
UBS Recruiting
Disclaimer / Policy Statements