Your organization needs regional disaster recovery: Here’s how to build it on Kubernetes


System outages are inevitable, but you can minimize disruptions. Here’s why regional disaster recovery based on Kubernetes container orchestration is crucial to effective business continuity.


Fires, hurricanes, floods: Disasters have always threatened IT operations. Today, cybersecurity breaches are of equal or greater concern. Even partial disasters can bring your business to a standstill. If a construction crew outside your building severs a cable, you could lose external connections, no matter how resilient your data center.

Every organization should have a business continuity plan, and a key component of your business continuity policy should be regional disaster recovery, which places a secondary IT environment far enough away from your primary site that it won’t be affected by the same disaster.


Kubernetes container orchestration can provide the foundation for modern DR strategies, helping organizations apply the benefits of cloud-native technologies to business continuity. Alongside supporting software, Kubernetes-based regional DR lets you replicate workloads to a secondary site more quickly, fail over and fail back, and automatically restore to your primary environment once the disaster is remediated.

Why you should meet regional recovery needs with Kubernetes

Regional DR uses asynchronous replication to bring up a secondary production environment with a minimal recovery point objective (RPO), the amount of data you can afford to lose, and recovery time objective (RTO), the amount of time you can afford to be down. For many organizations, the RTO is typically around 10 to 30 minutes.

That stands in contrast to metro DR, which relies on high-speed and high-cost fiber-optic connections to support sub-second RPO and RTO. Such synchronous replication writes data to two sites simultaneously, enabling the near-instant failover required for demanding, sensitive workloads such as banking transactions.
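To make those recovery objectives concrete, here is a minimal Python sketch that checks an asynchronous replication interval against an RPO target. The interval and targets are illustrative assumptions for the sake of the example, not figures from any particular product.

from datetime import timedelta

# Illustrative targets for a regional DR tier (assumptions, not product defaults).
rpo_target = timedelta(minutes=15)   # maximum data loss the business will accept
rto_target = timedelta(minutes=30)   # maximum downtime the business will accept

# With asynchronous replication, worst-case data loss is roughly the replication
# interval: everything written since the last completed sync could be lost.
replication_interval = timedelta(minutes=5)
worst_case_data_loss = replication_interval

print(f"Worst-case data loss: about {worst_case_data_loss}")
if worst_case_data_loss <= rpo_target:
    print("The replication interval meets the RPO target.")
else:
    print("Shorten the replication interval or relax the RPO target.")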

Why is Kubernetes ideal for DR? Kubernetes is an open source, portable and extensible platform for managing containerized workloads. Resilience is built in, because Kubernetes isn’t tied to a specific location or piece of hardware. If an application process fails, the platform immediately spawns a new instance to keep workloads running.
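As a small illustration of that built-in resilience, the following sketch uses the official Kubernetes Python client to create a three-replica Deployment; if any pod fails, or the node beneath it does, the Deployment controller starts a replacement automatically. The names and container image are placeholders.

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a cluster
apps = client.AppsV1Api()

# Three replicas: Kubernetes keeps three pods running and replaces any
# instance that fails, without operator intervention.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)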

A good DR solution offers confidence that when your Kubernetes applications move to your secondary site, all the relevant metadata goes along with them. That includes the namespace information, objects and custom configurations to help the workload function properly in the secondary environment. Without an effective DR methodology for Kubernetes, your IT team would have to assemble all that manually — a painstaking, error-prone and time-consuming task.
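To see why manual reassembly is so painful, the sketch below uses the Kubernetes Python client to dump just three of the object types a single application namespace might contain; a real workload typically adds secrets, persistent volume claims, ingresses and custom resources on top of these. The namespace name is a placeholder.

import yaml
from kubernetes import client, config

config.load_kube_config()
core, apps = client.CoreV1Api(), client.AppsV1Api()
serializer = client.ApiClient()

namespace = "shop"  # placeholder application namespace

# Only a slice of what has to travel to the secondary site: deployments,
# services and config maps. DR tooling automates collecting all of it.
objects = (
    apps.list_namespaced_deployment(namespace).items
    + core.list_namespaced_service(namespace).items
    + core.list_namespaced_config_map(namespace).items
)

for obj in objects:
    print(yaml.safe_dump(serializer.sanitize_for_serialization(obj)))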

Why you should use API integration for cybersecurity resilience

As cybersecurity attacks proliferate, the ability to recover quickly from these events has grown in importance. Kubernetes can help here as well.

One way is by addressing targeted cyberattacks. Some cyber exploits are intended to affect the largest number of victims, such as the various software supply chain attacks we’ve seen over the past few years. But increasingly, cyber criminals have specific targets in mind: A regional hospital, a municipal water plant or a geographic region in political turmoil. In these cases, regional DR built on Kubernetes can help your organization recover more quickly.

Kubernetes can also aid in broader IT responses to ransomware. That said, if your primary environment is shut down by ransomware, you can’t simply replicate to your secondary site because the ransomware will probably be replicated along with it.

A solution to this problem is an implementation of Kubernetes that includes a data-protection API. With a data-protection API, you can integrate Kubernetes with your existing data-backup solution. If your site is hit with ransomware, you can restore data to an earlier point in time in a safer production environment at your secondary site, where it can run until you remediate your primary site.
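The article doesn't prescribe a specific data-protection API, so as one hedged example, the sketch below assumes Velero, a widely used open source backup and restore tool for Kubernetes, and asks it to restore an application namespace from an earlier backup on the secondary cluster. The backup and namespace names are placeholders.

from kubernetes import client, config

config.load_kube_config()  # kubeconfig pointing at the secondary cluster
custom = client.CustomObjectsApi()

# Ask Velero (one possible data-protection layer) to restore the application
# namespace from a point-in-time backup taken before the ransomware hit.
restore = {
    "apiVersion": "velero.io/v1",
    "kind": "Restore",
    "metadata": {"name": "shop-pre-incident", "namespace": "velero"},
    "spec": {
        "backupName": "shop-nightly",     # placeholder backup name
        "includedNamespaces": ["shop"],   # placeholder application namespace
    },
}

custom.create_namespaced_custom_object(
    group="velero.io",
    version="v1",
    namespace="velero",
    plural="restores",
    body=restore,
)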

An effective Kubernetes implementation will likewise enable DR in edge environments. By combining Kubernetes and a data-protection API on a single node, you can benefit from essentially the same replication and DR capabilities in a smaller form factor, supporting both containers and virtual machines running at the edge.

How to get back up to speed

Once you’ve remediated your primary site, you’re ready to return production there. Using cluster management capabilities in your secondary environment makes the task simple. When your IT team triggers the process, the cluster manager automatically replicates applications with their namespace information and configurations back to your primary site. Without such capability, your IT team would have to manually rebuild your production environment piece by piece.
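What that trigger looks like depends on your tooling. As one illustration, the sketch below assumes a RamenDR-style DRPlacementControl resource (RamenDR is an open source Kubernetes DR orchestrator) and patches its action to relocate the application back to the preferred, now-remediated cluster. The API group, version, field names and cluster names are assumptions to verify against your own product's documentation.

from kubernetes import client, config

config.load_kube_config()  # context for the hub/management cluster
custom = client.CustomObjectsApi()

# Assumed RamenDR-style resource: setting the action to "Relocate" asks the
# operator to move the application back to the preferred (primary) cluster.
custom.patch_namespaced_custom_object(
    group="ramendr.openshift.io",   # assumption: RamenDR API group
    version="v1alpha1",             # assumption: current alpha API version
    namespace="shop",               # placeholder application namespace
    plural="drplacementcontrols",
    name="shop-drpc",               # placeholder DRPlacementControl name
    body={"spec": {"action": "Relocate", "preferredCluster": "primary-east"}},
)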

As business, IT and global environments become more complex every day, it’s very likely that every organization, at some point, will face a business-interrupting event. But with an effective implementation of Kubernetes and supporting technologies as part of a comprehensive DR strategy, your organization can recover more predictably, quickly and cost-effectively. That will help minimize the data and financial impacts while maximizing the speed with which you get back to work.

Marcel Hergaarden

Marcel Hergaarden is a senior manager of product marketing for the Data Foundation Business team at Red Hat. Based near Amsterdam, he has been with Red Hat since 2012. He has a technical background and extensive experience in infrastructure-related technical sales.

 
