UniSuper’s Cloud Outage and Google’s ‘One-of-a-Kind’ Misconfig

Earlier in May, members of UniSuper, an Australian superannuation fund (pension program), were unable to access their accounts — an outage that stretched on for more than a week for UniSuper’s customers. The culprit? A Google Cloud misconfiguration that resulted in the deletion of UniSuper’s Private Cloud subscription. On May 2, UniSuper released its first statement on the service disruption. Members had to wait until May 9 to log in to their accounts. On May 15, UniSuper confirmed that all member-facing services were fully restored.

In a world where enterprises rely on data and its availability in the cloud, the UniSuper outage offers IT leaders valuable lessons on risk and outage response.

Backups and Redundancies Are Vital

The 3-2-1 rule is a common mantra in the world of data management and protection. Keep one primary copy of your data and two backups, for a total of three copies. Those backups should use two different storage media, and one backup should be stored offsite.  
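As a rough illustration of what the rule means in practice, the sketch below checks a hypothetical inventory of data copies against the three conditions. The copy names, media types, and structure are illustrative assumptions, not tied to UniSuper’s setup or to any Google Cloud API.

```python
from dataclasses import dataclass

@dataclass
class DataCopy:
    name: str       # e.g., "primary" or "nightly snapshot"
    media: str      # e.g., "cloud object storage", "tape", "disk"
    offsite: bool   # stored away from the primary site or provider?

def meets_3_2_1(copies: list[DataCopy]) -> bool:
    """True if at least 3 copies exist, on at least 2 media types,
    with at least 1 copy held offsite."""
    enough_copies = len(copies) >= 3
    two_media = len({c.media for c in copies}) >= 2
    one_offsite = any(c.offsite for c in copies)
    return enough_copies and two_media and one_offsite

# Illustrative inventory: primary plus two backups, one on different media offsite.
copies = [
    DataCopy("primary", "cloud object storage", offsite=False),
    DataCopy("daily snapshot", "cloud object storage", offsite=False),
    DataCopy("weekly export", "tape", offsite=True),
]
print(meets_3_2_1(copies))  # True
```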

Cloud providers, even the big ones, aren’t perfect. “Relying solely on a single cloud provider for backup, even one as reputable as Google Cloud, can pose significant risks,” Kim Larsen, CISO at Keepit, a cloud data protection platform, says in an email interview. “UniSuper’s experience is a stark reminder of the potential data protection gaps when relying on one single cloud service for SaaS backup.”

UniSuper did, in fact, have backups in place, but the misconfiguration had a cascading impact. “UniSuper had duplication in two geographies as a protection against outages and loss. However, when the deletion of UniSuper’s Private Cloud subscription occurred, it caused deletion across both of these geographies,” according to a joint statement from UniSuper CEO Peter Chun and Google Cloud CEO Thomas Kurian.

The superannuation fund did have backups with another service provider, which helped to minimize data loss, according to the statement.

Despite those backups, UniSuper still had to contend with the fallout of a week-long cloud outage. “This incident raises questions about both geographical redundancy and retention periods for data stored in Google Cloud,” Todd Thorsen, CISO at CrashPlan, a cloud backup solution company, tells InformationWeek in an email interview. “The deletion of UniSuper’s private cloud subscription … led to deletion of all of their data. It seems to me that customer data should still be available for a reasonable period of time post-subscription and should not be immediately deleted, unless the customer directs it.”

What does this mean for enterprise leaders as they consider their organizational approach to the 3-2-1 rule?  

“CIOs should ensure they maintain strong third-party backup capabilities in line with service providers’ terms and conditions and that backup frequency is in line with risk tolerance for their organization,” says Thorsen.
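One hedged way to read “backup frequency in line with risk tolerance” is through a recovery point objective (RPO): the gap between backups should not exceed the amount of data loss, measured in time, the organization is willing to accept. The sketch below uses purely illustrative numbers, not figures from UniSuper, Keepit, CrashPlan, or Google Cloud.

```python
from datetime import timedelta

def backup_interval_ok(backup_interval: timedelta, rpo: timedelta) -> bool:
    """A backup cadence is acceptable only if the worst-case data loss
    (roughly one full interval) stays within the recovery point objective."""
    return backup_interval <= rpo

# Illustrative: a 24-hour backup cycle against a 4-hour tolerance for data loss.
print(backup_interval_ok(timedelta(hours=24), rpo=timedelta(hours=4)))  # False
print(backup_interval_ok(timedelta(hours=1), rpo=timedelta(hours=4)))   # True
```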

Kevin Miller, CTO at enterprise software solution company IFS, recommends enterprise leaders also think about the shared responsibility model. What is the enterprise responsible for, and what is the cloud provider responsible for? “Outline those different responsibilities and, more importantly, accountability,” he recommends.

Understanding responsibility and accountability can help organizations during the recovery process following an outage, whether caused by a misconfiguration or a cyberattack.  

Misconfigurations Are an Ongoing Risk  

The misconfiguration that caused the UniSuper cloud outage is referred to as a “one-of-a-kind occurrence” in the joint statement. Google conducted an internal review and took steps to prevent a recurrence of this particular incident.

“Openness and transparency about unfortunate incidents like this are important because they enable the rest of the IT community to learn from what happened and strengthen data protection measures,” says Larsen.

While this specific misconfiguration is unlikely to happen again, others like it could. “I think due to the complexity of things like full cloud, hybrid cloud, some shared responsibility of where data … is stored, it’s inevitable that it will happen again,” Miller cautions.  

The exact nature and fallout of future cloud misconfigurations are difficult to predict, but their inevitability is a reminder for enterprise leaders to include them in their risk assessment and planning processes.  

“One thing that can assist you in an uncertain world is proper testing of your business continuity plan and disaster recovery plan, so you can ensure the organization’s ability to recover after a fallout or a cyberattack,” says Larsen.  

Practice Disaster Recovery Plans

What does testing a disaster recovery plan look like?  

“Sometimes when we think of disaster recovery, we think of natural disasters: a tornado or a hurricane hits the data center or there’s some kind of weather event,” says Miller. “But the truth is things like malware, malicious attacks, a cloud provider having a hiccup, someone cutting through lines, those all need to be topics that are reviewed as part of that disaster recovery process.” 

Developing and testing that disaster recovery plan is an ongoing process for enterprises. Various scenarios — like a cloud misconfiguration that causes an outage — need to be run through, and everyone on the team needs to know their role in the recovery process.  

“That whole disaster recovery and backup process needs to be reevaluated and should be revisited multiple times a year,” says Miller.

AI, inevitably popping up in any IT conversation, has a potential role to play in strengthening these plans. “Humans can’t look at everything all at once at the same time 24/7, but certain machine learning models … can,” Miller points out. AI could potentially help enterprise teams spot gaps in their disaster recovery plans.  

The UniSuper incident may be anomalous, but the ongoing risk of cloud outages and data loss, stemming from any number of causes, is very real.  

“It should serve as a wakeup call for CIOs to assess their organization’s data resilience posture related to not only IaaS environments but across all essential and critical data,” says Thorsen.