Senior Data Engineer | Databricks - Melbourne (hybrid)
Role Overview:
We are seeking an experienced Senior Data Engineer with extensive Databricks Lakehouse expertise to join a client’s Data Engineering Hub.
The ideal candidate will have 3+ years of experience working with Databricks technologies, a strong background in data engineering, and a proven track record of designing and implementing scalable data solutions on Databricks and Lakehouse architecture.
Key Responsibilities:
- Develop solutions for data pipelines on Databricks, including reusable patterns that enable other engineers
- Develop and optimise notebooks using PySpark and Spark SQL
- Implement performance monitoring and tuning for data workflows
- Create and maintain data models to support business requirements
- Ensure data quality and integrity across all data pipelines
- Establish and implement best practices for data engineering, including data governance and security
Required Qualifications:
- Advanced PySpark development, including optimisation of wide transformations, shuffle reduction, and memory management
- Strong understanding of Spark execution plans, cluster configuration, and performance tuning
- Experience with Delta Lake, including ACID transactions, schema evolution, OPTIMIZE/VACUUM, and MERGE patterns
- Ability to design and implement Bronze → Silver → Gold Lakehouse architectures
- Proven experience as a Data Engineer with specific focus on Databricks
- Advanced SQL skills and experience with various database technologies
- Exceptional problem-solving abilities and attention to detail
- Experience with Data Modelling, Normalisation, and Dimensional Modelling
- Excellent communication skills and ability to work in a team environment