Data Platform Lead

Contract Type:

Temp

Location:

Sydney

Industry:

Data Architect

Contact Name:

Michael Mooney

Contact Email:

michael.mooney@methodrecruitment.com.au

Contact Phone:

0413245023

Posted Date:

13-May-2026

 

Data Platform Lead (Azure Databricks)

Contract: 12-month daily rate | Sydney (Hybrid)

Position Overview

The Technology Hub enables the organisation’s businesses to achieve their technology-driven goals.

The work undertaken spans leading and supporting enterprise transformation, building or adopting

modern technology platforms, protecting against risk, and delivering data insights that transform

technology capabilities.

The Infrastructure team is responsible for designing, implementing, and maintaining the

organisation’s IT backbone, including networks, servers, data centres, and cloud services. The team

ensures robust security, manages data storage and backup solutions, and provides technical support

and troubleshooting. The team also manages vendor relationships, procurement, and compliance

with relevant regulations. By optimising performance and planning for future needs, the

Infrastructure team ensures the stability, security, and efficiency of the IT environment, enabling

seamless operations across the organisation.

As the Data Platform Lead, you will be responsible for overseeing the design, implementation, and

operation of a modern data platform. You will ensure that data systems are robust, scalable, and

secure, supporting business needs and future growth. You will lead a team of engineers, provide

technical direction, and foster a collaborative, delivery-focused environment.

The data platform will support multiple enterprise domains, including HR (HCM) data. This includes

enabling secure, scalable analytics and AI use cases across employee, workforce, and organisational

data sourced from platforms such as Workday, SAP SuccessFactors, and Salesforce, while ensuring

strong governance, privacy, and compliance controls for sensitive employee information.

 

Why join us?

You will have the opportunity to work on complex, high-impact challenges across multiple industries,

contributing to outcomes that matter. The organisation values curiosity, collaboration, and

continuous improvement, offering an environment where people are encouraged to grow their skills,

explore new technologies, and make a tangible impact.

You will be part of a large, diverse, and highly skilled community that values bold ideas, teamwork,

and long-term outcomes.

What you’ll do

• Lead the design and implementation of data platform and infrastructure solutions aligned

with business requirements and industry best practices

• Ensure the stability, security, scalability, and reliability of the data platform

• Lead and mentor a team of data and platform engineers, supporting professional

development and capability uplift

• Drive platform improvements, promoting innovation and implementing solutions that

improve efficiency, reliability, and cost effectiveness

• Collaborate with architecture, engineering, security, and business teams to align platform

initiatives with organisational goals

• Oversee ongoing operations, support, and optimisation of the platform to minimise

downtime and improve performance

• Define, implement, and enforce platform standards, policies, and operating procedures

What we’re looking for

Technology-specific experience

• Strong experience with Spark, SQL, Python/PySpark, and building batch and streaming

pipelines (Structured Streaming, Auto Loader, Kafka/Event Hubs/Kinesis)

• Experience defining and implementing platform guardrails and standards, including Unity

Catalog, access and privilege models, cluster and SQL warehouse policies, workspace

strategies, naming conventions, and environment separation

• Experience building scalable ingestion and transformation frameworks using Delta Lake, Auto

Loader, and Delta Live Tables, including medallion (bronze/silver/gold) patterns for batch

and streaming workloads

• Experience establishing CI/CD pipelines and infrastructure-as-code practices for repeatable

provisioning (e.g. Terraform, the Databricks Terraform provider, GitHub Actions, Azure

DevOps, Jenkins, Databricks Repos)

• Experience implementing observability and reliability practices: logging, metrics, lineage

integration, data quality checks, SLAs/SLOs, incident response, and capacity planning

• Experience designing, building, and operating a Databricks lakehouse platform (workspaces,

Delta Lake, Unity Catalog, jobs/workflows, Databricks SQL) aligned to enterprise

architecture and data operating models

Industry experience

• Strong security and networking experience on a major cloud platform (AWS, Azure, or GCP),

including identity federation/SCIM, secrets management, private networking, firewalling,

encryption, and compliance controls

• Knowledge of FinOps and capacity management, including cost modelling, tagging and

chargeback, cluster policy design, autoscaling strategies, and cost optimisation for compute

and storage

• Solid understanding of lakehouse architectures and modern data engineering patterns,

including CDC, schema evolution, data quality frameworks, and performance optimisation

(including Photon)

Other role requirements

• Strong analytical and structured problem-solving skills, with the ability to translate between

business and technical stakeholders

• Experience assessing and prioritising AI and advanced analytics opportunities based on value,

feasibility, and delivery risk

• Proven capability in process mapping, requirements definition, and senior stakeholder

engagement

• Clear and confident communication skills for both executive and delivery audiences

• Relevant tertiary qualifications in technology, business, or related disciplines, with

foundational AI or data training highly regarded

 

What you’ll gain

• The opportunity to work with complex organisations and modern technologies that stretch

and grow your capabilities

• Flexible working arrangements that support work-life balance while meeting team and

delivery needs

• Clear career development pathways supported by continuous learning and leadership

development

• Competitive leave entitlements and wellbeing benefits

Equal Opportunity Statement

The organisation is committed to providing a fair, inclusive, and respectful recruitment process.

Applicants are encouraged to request reasonable adjustments or workplace accommodations to

support them throughout the selection process and in the role.

APPLY NOW
