Building Trust in the Age of AI

How we use artificial intelligence is just as important as what we use it for. Why? Because Australia’s opportunity to benefit from AI, the success of our AI strategy, and the trust of our customers and the Australian community depend on it.

Lisa Green · 20 April 2026 · 6 minute read

We want every innovation in AI to improve lives and experiences without compromising privacy, security, or ethics. Responsible AI isn't a checkbox for us – it's the foundation of how we build and deploy every AI solution.

Ethics by Design

From day one, Telstra has embedded responsible and ethical AI practices into our AI projects by design.

We made a commitment to Australia's AI Ethics Principles many years ago; we were the first Australian company to join UNESCO's global council promoting ethical AI; and we operate under a comprehensive Responsible AI Policy that translates these high-level principles into practices people can follow.

  • For example, one key principle is that AI systems should be reliable and safe. One way we bring this to life is by requiring AI systems to be tested and monitored in a way that links to their intended purpose.
  • Another principle is accountability. We bring this to life by having a named owner for every AI system we introduce, and by considering and documenting whether direct human oversight of AI-enabled decisions (human-in-the-loop) is required. We also back up our principles with strong governance processes.

All AI systems, whether built in-house or by a partner, are included on an AI register and screened for their impact level on various stakeholders.

  • Higher-impact systems go through independent review by our AI Risk Oversight Council (AIROC). AIROC is a cross-functional committee of experts from across Telstra – ranging from data science and engineering to legal, privacy, cyber security, people and culture, and customer experience. This group closely examines the risks and controls of proposed AI use cases to ensure they are introduced safely.
  • Alongside AIROC, Telstra's senior leadership and even our Board (through an Audit & Risk Committee) keep a watchful eye on how AI is being used in the business.
  • We also train our staff in AI ethics and responsible innovation – both in our annual training for all staff, and for specific AI systems we introduce. For example, every Telstra employee must complete responsible AI training before getting access to Microsoft 365 Copilot.
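To make the register-and-screen process above concrete, here is a minimal illustrative sketch in Python. The class names, impact tiers, and escalation rule are assumptions for illustration only, not Telstra's actual system:

```python
from dataclasses import dataclass, field
from enum import Enum

class Impact(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AISystem:
    name: str
    owner: str            # accountability: every system has a named owner
    impact: Impact        # screened impact level on stakeholders
    human_in_loop: bool   # whether direct human oversight is required
    built_in_house: bool  # in-house and partner-built systems are both registered

@dataclass
class AIRegister:
    systems: list = field(default_factory=list)

    def register(self, system: AISystem) -> None:
        # All AI systems go on the register, regardless of who built them
        self.systems.append(system)

    def needs_independent_review(self, system: AISystem) -> bool:
        # Higher-impact systems are escalated for independent review
        return system.impact is Impact.HIGH

register = AIRegister()
chatbot = AISystem("Telstra Assistant", "named.owner", Impact.HIGH,
                   human_in_loop=True, built_in_house=True)
register.register(chatbot)
```

Under this sketch, `register.needs_independent_review(chatbot)` returns `True`, mirroring how a higher-impact system would be routed to a review body.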

Consider Telstra Assistant, our Gen AI-powered chat assistant that you'll find on our website and in the MyTelstra app. When it was being developed, we looked closely at what data it would draw on, what information it would share with users, what would happen if a user was identified as a vulnerable customer, and more.

In every case we needed to be confident the tool would comply with our Responsible AI Policy.

This rigorous oversight ensures we identify potential risks early and address them appropriately, before deployment.

By nurturing a culture of responsibility and accountability, we make sure everyone at Telstra plays a part in upholding our AI values.

Scaling Governance with a Responsible AI Control Plane

As we expand our use of AI – from network management tools to customer-facing ones like our AI-powered Telstra Assistant – we're also scaling up our ability to govern these systems effectively.

The next frontier in our journey is building a Responsible AI control plane. In simple terms, this is a company-wide platform that will help us monitor and manage all our AI models, ensuring they comply with our ethical standards and policies even as they operate on a large scale.

We are currently building and testing the control plane in our non-production environment and will have more to share on how we're scaling it soon.
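The core idea of such a control plane can be sketched in a few lines: each deployed model reports metrics, and the plane flags any that breach policy. This is a hypothetical illustration only; the thresholds, field names, and checks are assumptions, not Telstra's actual platform:

```python
# Illustrative policy thresholds a control plane might enforce (assumed values)
POLICY = {
    "max_error_rate": 0.05,   # models above this error rate are flagged
    "requires_owner": True,   # every model must have a named owner
}

def check_compliance(model: dict) -> list[str]:
    """Return a list of policy violations for one registered model."""
    violations = []
    if model.get("error_rate", 0.0) > POLICY["max_error_rate"]:
        violations.append("error rate above threshold")
    if POLICY["requires_owner"] and not model.get("owner"):
        violations.append("no named owner")
    return violations

# A central view across all models, as a control plane would provide
models = [
    {"name": "assistant", "owner": "team-a", "error_rate": 0.02},
    {"name": "forecaster", "owner": "", "error_rate": 0.09},
]
flagged = {m["name"]: v for m in models if (v := check_compliance(m))}
```

Here `flagged` would contain only `"forecaster"` (missing owner, error rate too high), showing how one platform-wide check can surface non-compliant models wherever they run.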

Leading Responsible AI in Australia – and Beyond

We're proud of what we're doing to build trusted and responsible AI systems, but our efforts don't stop at our own doorstep. We actively collaborate with industry partners, government bodies, and global organisations to champion best practices in AI ethics. By sharing our experiences and learning from others, we aim to help shape an Australian AI landscape that is both innovative and responsible.

Ultimately, we all need to keep humanity at the heart of technology, be transparent about how AI works, protect privacy and security, and always ask the tough ethical questions.

If we get this right, we believe that not only will Telstra's AI strategy thrive, but it will also help build greater trust in AI across Australia – ensuring that everyone can share in the benefits of these powerful technologies, safely and confidently.

By Lisa Green

Data & AI Solutions Executive, Telstra