How we rebuild our network after a natural disaster strikes
We have a whole team of people dedicated to planning for and recovering from natural disasters. Planning our response starts months before a natural disaster strikes, so that repair and recovery can begin as soon as it is safe to do so.
Before disaster strikes
Even when there isn’t a disaster on the horizon, our network experts monitor our network and its physical infrastructure, and work to make them more resilient.
We take steps including hardening physical cables against fire and flood damage; ensuring batteries and generators are fit for purpose; and maintaining helicopter landing sites at many of our telephone exchanges so we can access them by air should we need to.
In some cases, like when we see a cyclone developing out to sea, we have time to sandbag at-risk telephone exchanges and roadside cabinets to reduce the risk of water damage, and to move temporary power generators into staging locations where they’re ready to be deployed quickly.
During the summer we expect to encounter bushfires and severe storms, so we make sure our backup equipment is ready to go. The same goes for our highly skilled and dedicated technicians, whom we put on standby for the coming days and weeks, ready to deploy when it is safe to do so.
But as we’ve seen from the “rain bomb” that besieged Southern Queensland and Northern NSW in recent weeks, even with the best planning, severe weather can still damage power, water and telco infrastructure.
Phase one: intel
When a disaster strikes, our “Mission Control” swings into rapid action.
It’s called the Global Operations Centre (or GOC), and it’s staffed 24×7, 365 days a year with network professionals who monitor the health of our network both in Australia and around the world.
If connectivity is lost, or a community is pushed offline by a disaster, alarms ring in the GOC, alerting the team to the issues.
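To picture how that kind of automated alerting works in principle, here’s a minimal sketch – not our actual tooling – of a health check that raises an alarm after a site misses several probes in a row. The site name, probe logic and threshold are hypothetical.

```python
# Illustrative only: a toy connectivity monitor in the spirit of an
# operations-centre alarm. Site name, probe results and the failure
# threshold are all hypothetical.
from dataclasses import dataclass, field

@dataclass
class SiteMonitor:
    name: str
    failure_threshold: int = 3                       # consecutive misses before alarming
    consecutive_failures: int = field(default=0, init=False)

    def record_probe(self, reachable: bool) -> bool:
        """Record one reachability probe; return True if an alarm should fire."""
        if reachable:
            self.consecutive_failures = 0
            return False
        self.consecutive_failures += 1
        return self.consecutive_failures >= self.failure_threshold

# Example: three missed probes in a row for a (hypothetical) exchange
monitor = SiteMonitor("Example exchange")
for probe_ok in (True, False, False, False):
    if monitor.record_probe(probe_ok):
        print(f"ALARM: {monitor.name} unreachable for "
              f"{monitor.consecutive_failures} consecutive probes")
```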
The GOC team then rapidly assesses the situation, with the safety of our customers and our employees as the top priority.
Gathering intel about what’s happening on the ground is crucial in this phase. Failing to plan is planning to fail, so we need to be very clear about who we’re sending in and what they need to do once they’re there.
Construction crews are lined up, equipment is allocated, and emergency gear – like Cells-On-Wheels (COWs) and Mobile-Exchanges-On-Wheels (MEOWs) – is deployed once it’s safe to do so.
Then after that intense planning, the real work starts.
Phase two: logistics
Access is key to network restoration. If a site is still cut off from roads, underwater, on fire or bearing the brunt of a cyclone, we can’t safely access it to start restoration.
The sheer scale and severity of the recent flooding in Northern NSW and Southern Queensland showed just how big this job can be.
The “rain bomb” was unprecedented in that the water just kept on coming. It not only caused significant flooding, but also prevented emergency services and our technicians from accessing damaged sites.
Exchanges were underwater, roadside cabinets were flooded and, in some instances, whole mobile towers were destroyed.
Worse still, landslips and landslides tore away parts of our fibre and copper networks. Bridges and roads were washed away, often taking our network cables with them.
In a disaster we know how important the mobile network is. But the mobile network relies on underground cables to transmit data and signals, so when supporting infrastructure like our exchanges and cables is destroyed, even the most resilient mobile network can’t operate properly.
With flood waters continuing to cut off road access to our infrastructure, we worked with the NSW Telco Authority and State Emergency Service to get into the disaster zone. One of our team members was able to grab a ride in a helicopter and carry out aerial reconnaissance to find out what equipment was still in place and what had been washed away.
Once the water started to recede and we had a view of what remained, the process of cleanup and reconnection started. Bringing things back online isn’t always as easy as flipping a switch.
It also means removing the snakes and small animals that have taken shelter from the elements inside our cable pits.
Phase three: make it work
As anyone who has lived through a natural disaster knows, the process of rebuilding and restoring can be protracted and painstaking. The same is true for restoring our network.
To make sure we’re doing it right, we take a two-pronged approach: get things working again as quickly as possible, and make sure they come back stronger for next time.
Flooding in Lismore, for example, put the entire lower level of our telephone exchange underwater – a floor full of important equipment used to keep people in the local area connected.
Instead of simply drying that equipment out and turning it back on – which can lead to failures later – we’ll replace it with new, more modern gear that we can house on the second floor of the building for better flood-proofing.
Our techs will then work to restore cable connectivity to the surrounding infrastructure that powers interstate connectivity and local mobile service.
This will be a painstaking process this time around, as our techs work to find cable cuts big and small, all while trudging through a still-flooded jungle. Think of old-school Christmas tree lights, where you need to track down the one malfunctioning bulb to get the whole string working – then multiply that by rain, floods and thick bushland.
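To make the analogy a little more concrete, here’s an illustrative sketch – invented for this post, not how our field crews actually test cable – of how halving the search space quickly narrows down a single break in a long run of segments.

```python
# Illustrative only: locating a single break in a chain of segments by
# halving the search space, in the spirit of the Christmas-lights analogy.
# The "is_intact" test is a stand-in for a real field measurement.
def find_break(num_segments: int, is_intact) -> int:
    """Return the index of the faulty segment.

    is_intact(k) answers: "do segments 0..k-1 all carry signal?"
    Assumes exactly one break exists somewhere in the run.
    """
    lo, hi = 0, num_segments - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_intact(mid + 1):      # everything up to and including mid is fine
            lo = mid + 1            # the break is further along
        else:
            hi = mid                # the break is at mid or earlier
    return lo

# Example: a 1,000-segment run with a (hypothetical) break at segment 613
broken_at = 613
print(find_break(1000, lambda k: k <= broken_at))  # -> 613
```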
Where cables have been completely washed away, we’ve laid new cables above ground, adjacent to the broken infrastructure, to restore service – and then worked out how to safely rebury and secure them to make the fix permanent.
Mobile towers that have been affected get the same treatment: we restore what we can as quickly as we can, then work out how to make them more resilient for future disasters. Some towers have been completely destroyed, and a rebuild can take weeks or months to complete.
It’s then a series of logistical sprints by our teams to get things done, as we rotate techs from around the country to come in and lend a hand to de-mud and rebuild our network.
Connectivity is vital in an emergency, and we work around the clock to restore it following a disaster. We know everyone wants it back as soon as possible – so do we. While we rebuild, we appreciate your patience, and we’re working as fast as we can.