Why multicloud environments can help improve security and redundancy


Single-cloud environments are said to be redundant. One expert disagrees and explains why.

Image: iStockphoto/Denis Isakov

Before cloud computing burst on the scene, high-availability digital architectures were the holy grail. That meant redundant network providers, redundant data centers, and redundant internet service providers—all to eliminate single points of failure that have the potential to shut an organization down.

That all changed when cloud computing made its debut. Cloud providers claimed their computing and storage environments were fully redundant and that a single cloud provider using multiple data centers was safe. And, even more appealing, switching to the cloud appeared to be significantly cheaper from an operational standpoint.


Michael Gibbs, CEO of Go Cloud Architects, a global organization providing training in cloud computing, said during an email conversation that he wanted to set the record straight when it comes to cloud computing environments.

Single-cloud computing environments are risky

Gibbs offers the following reasons why using a single cloud provider is a risky proposition:

  • When an organization uses a single-cloud provider, that usually means working with one network provider, and that’s a single point of failure.
  • Single-cloud providers advertise redundancy by employing multiple data centers. However, those data centers share a common control plane. “The control plane is what enables the cloud to function,” Gibbs said. “The cloud control plane orchestrates the network and data centers. If anything happens to the cloud control plane, that will likely turn into a single-point-of-failure outage.”
  • Cloud providers are high-value targets for cybercriminals. If there’s an attack and cybercriminals get control of the cloud, they can access sensitive business and customer data, or if desired, the attackers could prevent access to the cloud-computing service.

Gibbs offers this example: “Imagine what could happen if a hospital and a 911 dispatch center were hosted on a single cloud provider and there was an outage.”


And we all know that cloud outages occur. Last year, several highly rated cloud service providers fell victim to significant outages. “These cloud providers have the best equipment and personnel in the world,” Gibbs wrote. “The thing is, tech fails, and we need to plan for it.”

Multicloud environments are the answer

Gibbs is adamant that using a multicloud environment is the way to go.

“Multicloud is the use of multiple cloud computing and storage services in a single heterogeneous architecture. This also refers to the distribution of cloud assets, software, applications, etc., across several cloud-hosting environments. With a typical multicloud architecture utilizing two or more public clouds as well as multiple private clouds, a multicloud environment aims to eliminate the reliance on any single cloud provider.”

Gibbs next looked at what is needed to support a multicloud environment. He highly recommends building two identical clouds using open-source tools such as the ones listed below:

  • Open databases (MariaDB, MongoDB, Apache Cassandra)
  • Open Kubernetes services
  • Standard networking protocols (BGP, 802.1q)
  • Open Linux (Ubuntu, Red Hat, CentOS)
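
To make the "two identical clouds" idea concrete, here is a minimal sketch of keeping both environments in sync with open tooling. The kubeconfig context names (cloud-a, cloud-b) and manifest file names are hypothetical; the point is that the same open-standard Kubernetes manifests are applied to both clusters with kubectl, so neither cloud depends on vendor-proprietary deployment machinery.

    import subprocess

    # Hypothetical kubeconfig context names for the two clouds; substitute your own.
    CLUSTERS = ["cloud-a", "cloud-b"]
    # The same open-standard manifests are applied to both clouds.
    MANIFESTS = ["deployment.yaml", "service.yaml"]

    def apply_everywhere() -> None:
        """Apply identical Kubernetes manifests to every cluster so the two
        environments stay functionally the same."""
        for context in CLUSTERS:
            for manifest in MANIFESTS:
                # kubectl's --context flag selects which cluster to target.
                subprocess.run(
                    ["kubectl", "--context", context, "apply", "-f", manifest],
                    check=True,
                )

    if __name__ == "__main__":
        apply_everywhere()

Because only open manifests and open tools are involved, neither cloud becomes the sole source of truth for the other.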

When it comes to security, Gibbs adds, “No cloud vendor-proprietary service should be used, as marketplace security is not vendor proprietary and, in many cases, offers more robust security than cloud-native security tools.”

To keep things simple and secure, Gibbs recommends:

  • Using commercial, non-cloud-specific tools such as marketplace firewalls and VPN concentrators that can hold a nearly identical configuration in both clouds (Cisco, Palo Alto, Fortinet, Check Point, etc.).
  • Ensuring each side of a connection has the same security configuration (a drift-check sketch follows this list).
  • Front-ending the two virtual firewalls in each cloud with a network load balancer, followed by network access control lists, security groups, host-based firewalls, endpoint protection, and similar identity and access management policies.
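
One way to verify that each side of a connection really does hold the same security configuration is to periodically diff the exported rule sets. The sketch below is illustrative only: the export file names and JSON field names are hypothetical and would depend on how your marketplace firewall exports its policy.

    import json

    def load_rules(path: str) -> set:
        """Load an exported rule set as a set of comparable tuples.
        The JSON field names here are hypothetical."""
        with open(path) as f:
            rules = json.load(f)
        return {
            (r["direction"], r["protocol"], r["port"], r["source"], r["action"])
            for r in rules
        }

    # Hypothetical export files, one per cloud.
    cloud_a = load_rules("cloud_a_firewall.json")
    cloud_b = load_rules("cloud_b_firewall.json")

    drift_a = cloud_a - cloud_b
    drift_b = cloud_b - cloud_a

    if drift_a or drift_b:
        print("Configuration drift detected:")
        for rule in sorted(drift_a):
            print("  only in cloud A:", rule)
        for rule in sorted(drift_b):
            print("  only in cloud B:", rule)
    else:
        print("Both clouds hold an identical rule set.")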

Creating network connections

According to Gibbs, the router connecting to each cloud provider should have redundant line cards, redundant control modules, and redundant power supplies.

“There should be a separate high-availability router for each connection,” Gibbs says. “Each WAN connection to the cloud provider (Ethernet WAN) should be from a different network service provider. Each WAN connection to the cloud should also be in a separate direct connect/express connect point of presence—redundancy everywhere.

“Two internet connections across two internet service providers are needed at the customer’s site connecting to the internet with BGP for load sharing and optimized routing,” Gibbs says. “There should be two separate routers at the customer site that will provide backup VPNs to each cloud provider, should one of the primary network connections fail.”
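
As a rough illustration of the failover logic Gibbs describes, the sketch below probes one endpoint per primary cloud connection and flags that path for a backup VPN when the probe fails. The host names are placeholders, and the actual failover would be carried out by the routers or your automation tooling, not by this script.

    import socket
    import time

    # Placeholder endpoints reachable over each cloud's primary connection.
    ENDPOINTS = {
        "cloud-a-primary": ("gw-a.example.net", 443),
        "cloud-b-primary": ("gw-b.example.net", 443),
    }

    def path_is_up(host: str, port: int, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to the endpoint succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    while True:
        for name, (host, port) in ENDPOINTS.items():
            if path_is_up(host, port):
                print(f"{name} healthy")
            else:
                # In practice this is where the backup VPN to that cloud
                # would be brought up (router API, automation hook, etc.).
                print(f"{name} unreachable; fail over to backup VPN")
        time.sleep(30)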

More thoughts from Gibbs:

  • Each site (the customer site and each cloud provider) should use a different CIDR range that can easily be summarized into a single route if desired (see the address-plan sketch after this list).
  • Nearly identical BGP policies should be set up for the routing between each cloud (obviously adjusted for address differences).
  • If moderate availability of 99.99% is sufficient, the best approach is to use a single availability zone (data center) in each of two clouds.
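
The address-plan sketch referenced above can be expressed with Python's standard ipaddress module. The 10.0.0.0/14 supernet and the one-/16-per-site split are assumptions chosen purely for illustration; the point is that non-overlapping per-site ranges collapse back into a single summary route.

    import ipaddress

    # Assumed address plan: one /16 per location, all drawn from a single /14
    # so they can be summarized into one route when desired.
    SUPERNET = ipaddress.ip_network("10.0.0.0/14")
    sites = ["customer-site", "cloud-a", "cloud-b", "spare"]

    # Carve one non-overlapping /16 out of the supernet for each site.
    allocations = dict(zip(sites, SUPERNET.subnets(new_prefix=16)))

    for site, cidr in allocations.items():
        print(f"{site:13s} {cidr}")

    # All of the per-site prefixes collapse back into the single summary route.
    summary = list(ipaddress.collapse_addresses(allocations.values()))
    print("summary route:", summary)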

Super-high availability designs

Gibbs defined super-high availability as networks that are at least 99.999% available and do not experience more than five minutes of unplanned downtime per year. “When this level of availability is needed, using two availability zones (data centers), each in two separate clouds is recommended,” Gibbs said. “Keeping the same design as above, but with two data centers per cloud provider.”
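
The arithmetic behind those availability targets is easy to check. The snippet below converts each target into minutes of allowed downtime per year (which is where the roughly five-minute figure for 99.999% comes from) and shows the combined availability of two zones under the simplifying assumption that their failures are independent, which, given the shared control plane Gibbs describes, is not guaranteed within a single cloud.

    # Minutes of allowed downtime per year at each availability target.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for target in (0.9999, 0.99999):
        downtime = (1 - target) * MINUTES_PER_YEAR
        print(f"{target:.3%} availability -> {downtime:.1f} min of downtime/year")

    # Combined availability of two zones, assuming independent failures.
    zone = 0.9999
    redundant = 1 - (1 - zone) ** 2
    print(f"two independent zones at {zone:.2%} each -> {redundant:.6%}")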

There is a problem, Houston

If the above seems complex, many agree. In Lance Whitney’s TechRepublic article How to beef up your multicloud security, he writes: “A full 95% of the respondents [of a Valtix survey] said they’re making multicloud a priority in 2022, with almost all of them putting security at or near the top of the list. Yet only 54% said they feel confident that they have the tools and skills necessary to achieve this goal.”

If you look back at pre-cloud computing networks, it becomes apparent that Gibbs is trying to inject that same redundancy into cloud-computing environments to reduce the likelihood of the single-point-of-failure outages that can occur when relying on a single cloud provider.
