The current state of edge computing


Image: Blue integrated circuit with an edge computing icon. Michael Traitov/Adobe Stock

First, there were cables. Then, the Earth cooled, and we got networking and the early bulletin board systems that served as a precursor to what became the World Wide Web. After that, we realized we could use one word instead of three, and we started to refer to the web as the Internet.

After the turn of the millennium, the popularization of cloud computing and the much-needed renaissance of artificial intelligence and machine learning, we started to build Internet connectivity not just into our desktops but also into so-called smart machines. These devices had compute and storage power all of their own, and their eventual application points would be ubiquitous and multifarious.

Although social media-connected smart toasters and self-scanning refrigerators capable of emailing homeowners when the milk is about to go off do exist, the real-world application of connected smart machines has focused predominantly on industrial use cases, in everything from sensors to switches to seismographs.

This brief history of the Internet of Things has given us edge computing. As TechRepublic has explained before, the term edge computing is distinguished from IoT in that it describes “what happens on IoT devices,” rather than being just another name for the environment at large.

Edge computing, then, is the set of actions, processes and workflows associated with devices that can collect, process, analyze and generate data in situ. It is the element of computing that does not happen in a centralized environment such as a cloud data center.

SEE: Don’t curb your enthusiasm: Trends and challenges in edge computing (TechRepublic)

The clue is in the name

Today we know that edge computing often happens in real time and very much as a standalone digital entity in the wider enterprise IT stack. The clue really is in the name: Edge computing happens out on the edge.

But where do we find the edge today? What form does it take? How do we differentiate between the various shapes and forms of edge? What’s the difference between micro-edge, mini-edge, medium-edge, heavy-edge and multi-access edge?

The notion of micro- and mini-edge describes devices that reach down to the level of a single printed circuit board and even a single microcontroller. While the largest devices at this level might run a traditional PC operating system (or, more likely, Linux), individual microcontrollers are bare-bones integrated circuits designed to oversee one specific operation in an embedded edge computing system. Both micro- and mini-edge devices can benefit from AI acceleration to speed up the work they do.

“As more and more computing today happens closer to the edge, it is logical that we will see micro-edge and mini-edge computing develop as a rapidly growing trend that helps businesses improve network performance, reduce network congestion and increase security,” said Erwan Paccard, head of products at Traefik Labs, a company known for its cloud-native application networking stack technology.

Today, open source solutions like Rancher K3s and Traefik are increasingly popular for deploying and managing these types of micro- and mini-edge applications. The devices themselves have limited processing power and storage capacity, but they are designed to handle simple tasks such as data collection, pre-processing and basic analytics.

“These solutions are lightweight, easy to install, and provide advanced features that make it easy to deploy and manage edge applications at scale,” said Paccard. “As the demand for low-latency, high-bandwidth applications continues to grow, the use of mini-edge computing architectures is expected to become even more widespread.”
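To make that concrete, here is a minimal sketch in Python of the kind of simple task a mini-edge node might handle: sample a local sensor, pre-process the readings on the device and forward only a compact summary upstream. The sensor driver, gateway address and thresholds are illustrative placeholders, not details of any product mentioned above.

```python
"""Minimal sketch of a mini-edge workload: collect readings, pre-process
them locally and forward only a compact summary upstream.
All names (read_sensor, GATEWAY_URL) are illustrative placeholders."""

import json
import random
import statistics
import time

GATEWAY_URL = "http://edge-gateway.local/ingest"  # hypothetical upstream endpoint
WINDOW_SIZE = 30          # readings per summary
ALERT_THRESHOLD = 75.0    # simple local analytics rule

def read_sensor() -> float:
    # Stand-in for a real driver (for example, an I2C temperature probe).
    return 20.0 + random.random() * 60.0

def summarize(window: list[float]) -> dict:
    # Pre-processing and basic analytics happen on the device itself.
    return {
        "timestamp": time.time(),
        "mean": statistics.fmean(window),
        "max": max(window),
        "alerts": sum(1 for v in window if v > ALERT_THRESHOLD),
    }

def main() -> None:
    window: list[float] = []
    for _ in range(WINDOW_SIZE):
        window.append(read_sensor())
        time.sleep(0.01)  # a real device might sample once per second
    payload = summarize(window)
    # In a real deployment this summary would be POSTed to GATEWAY_URL;
    # here we just print it so the sketch stays self-contained.
    print(json.dumps(payload))

if __name__ == "__main__":
    main()
```

In a real deployment, a lightweight distribution such as K3s could schedule and update a container running a loop like this across a fleet of small devices.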

Medium to well done

Moving logically upwards, we come to the medium-edge, a device deployment model typified by grouping a number of devices in a cluster. Although it requires some form of cluster management and orchestration technology (think Kubernetes, obviously), the resulting deployment has more processing and storage capacity than micro-edge and mini-edge deployments and so can handle more complex tasks.

“We find medium-edge computing devices deployed at the ‘physical’ edge of our networks, such as at the city or regional level,” said Wayne Carter, vice president of engineering at the NoSQL cloud database company Couchbase. “Devices in this deployment model are designed to handle more complex tasks such as data processing, analytics and machine learning, as well as more advanced applications such as augmented reality and virtual reality.”

Carter suggests that medium-edge devices are found in smart city environments such as traffic lights, security cameras and all manner of sensors. This category could also include medical devices, such as wearables and diagnostic equipment, allowing for real-time monitoring and analysis of patient health data.

Couchbase also reports working with medium-edge use cases in the retail sector, where devices are used to analyze data from in-store cameras, sensors and other tracking devices focused on both people and goods, allowing for real-time monitoring of customer behavior and inventory management.
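As a rough illustration of the extra headroom Carter describes, the sketch below shows what a medium-edge service might do with the summaries produced by many smaller nodes: keep a rolling window per zone and apply a simple statistical check across it. The zone names and the anomaly rule are invented for the example and are not drawn from Couchbase’s deployments.

```python
"""Hedged sketch of a medium-edge aggregation service: it consumes the
summaries produced by many mini-edge nodes and runs slightly heavier
analytics across them. Zone names and the anomaly rule are illustrative."""

from collections import defaultdict, deque
from statistics import fmean, pstdev

HISTORY = 100  # rolling window of summaries kept per zone

class ZoneAggregator:
    def __init__(self) -> None:
        self._windows: dict[str, deque] = defaultdict(lambda: deque(maxlen=HISTORY))

    def ingest(self, zone: str, mean_reading: float) -> bool:
        """Record a summary from one node and flag it if it looks anomalous."""
        window = self._windows[zone]
        is_anomaly = False
        if len(window) >= 10:
            mu, sigma = fmean(window), pstdev(window)
            # Simple z-score rule: anything more than 3 std devs out is flagged.
            is_anomaly = sigma > 0 and abs(mean_reading - mu) > 3 * sigma
        window.append(mean_reading)
        return is_anomaly

if __name__ == "__main__":
    agg = ZoneAggregator()
    readings = [21.0, 21.4, 20.9, 21.1, 21.3, 21.0, 20.8, 21.2, 21.1, 21.0, 55.0]
    for reading in readings:
        if agg.ingest("junction-12", reading):
            print(f"anomalous reading in junction-12: {reading}")
```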

Getting kind of heavy

If you’re wondering why we have yet to mention the Industrial IoT and the development of heavyweight edge installations that represent comparatively meaty computing estates, we’re getting to it now. Sometimes located in an industrial facility, sometimes in a customer’s own on-premises data center and sometimes residing in a public cloud data center, heavy-edge describes a combined hardware and software stack.

“Heavy-edge refers to the deployment of edge computing in large, high-performance computing environments, such as in manufacturing or industrial settings,” confirmed Dominique Bastos, senior vice president of cloud at the digital engineering company Persistent Systems.

Degrees of proximity

Extending all of these nuances of edge one tier further, Bastos also points to multi-access edge.

“In the multi-access edge space, we’re referring to the deployment of edge computing at multiple access points — such as in a cellular network — to provide low latency and high-bandwidth services,” Bastos said. “The primary difference between all of these edge terms is the degree of proximity to the end user and the level of computational power and data storage at the edge.”

This concept of “degree of proximity” may be the defining element of this still-changing set of definitions. After all, while these terms have not been ratified by some international consortium of networking best practices, they belong to the same de facto vocabulary we already use to describe much of how cloud computing continues to grow.

Fawad Qureshi, industry field CTO at data cloud specialist Snowflake, says it’s often challenging to provide concrete definitions of the micro-, mini-, medium-, heavy- and multi-access edge, primarily because what is mini- or medium-edge today might well become heavy- or multi-access edge tomorrow.

“If we follow the premise of Moore’s Law on technological evolution — i.e., that the number of transistors in an integrated circuit doubles every two years — then it’s hard not to see how these notions of edge can quickly evolve,” Qureshi noted.
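Taken literally, that doubling is easy to project. The short sketch below assumes an illustrative baseline of roughly 100 billion transistors for a large chip in 2024 (an assumption chosen only to make the arithmetic concrete, not a vendor figure) and applies the two-year doubling Qureshi describes.

```python
# Illustrative projection of Moore's Law-style doubling: transistor count
# doubles every two years. The 100-billion baseline for 2024 is an assumption.
BASELINE_YEAR = 2024
BASELINE_TRANSISTORS = 100e9  # assumed starting point, not a vendor figure

def projected_transistors(year: int) -> float:
    doublings = (year - BASELINE_YEAR) / 2
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (2026, 2028, 2030):
    print(f"{year}: ~{projected_transistors(year) / 1e12:.2f} trillion transistors")
```

Even this crude projection lands in the neighborhood of a trillion transistors by the end of the decade, which is one reason labels like mini- and medium-edge are unlikely to stay fixed for long.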

A new recipe challenge

As edge devices continue to gain processing, compute, analytics and storage power, we may find ourselves with a new recipe-ingredients challenge: determining, for any given edge deployment, how to combine different technologies based on the processing requirements of the job, the service level agreement at hand and the practicality of doing so in the first place.

“In general, decisions are taken closer to the edge when the decision-making does not involve significant contextual historical processing,” Qureshi said. “For example, a self-driving car coming across an obstruction and requiring to stop. That decision cannot be made with any delay. Routing a message back to the cloud takes a few milliseconds to complete, by which time the vehicle may have already crashed.”
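Qureshi’s example boils down to a latency budget. The sketch below captures that placement rule in a few lines of Python; the round-trip and processing figures are illustrative assumptions, not measurements.

```python
# Hedged sketch of the placement rule described above: if the cloud round
# trip cannot fit inside the decision deadline, the decision has to be taken
# at the edge. The latency figures are illustrative assumptions.

CLOUD_ROUND_TRIP_MS = 40.0   # assumed network round trip plus queuing
LOCAL_INFERENCE_MS = 15.0    # assumed on-device processing time

def placement(deadline_ms: float) -> str:
    """Return where a decision with the given deadline should be made."""
    if deadline_ms < CLOUD_ROUND_TRIP_MS + LOCAL_INFERENCE_MS:
        return "edge"  # no time to consult the cloud at all
    # Otherwise the richer historical context in the cloud may be worth the trip.
    return "cloud"

for task, deadline in [("emergency braking", 30.0), ("route re-planning", 2000.0)]:
    print(f"{task}: decide at the {placement(deadline)} (deadline {deadline:.0f} ms)")
```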

What does the future hold for edge proliferation, segmentation and indeed classification of this kind? Intel is aspiring to put one trillion transistors in a single device by 2030, so the sky — or at least the web and the cloud backbone — is clearly the limit.

What type of edge sub-genre will we be defining and encoding next, then? We can look forward to eco-wellbeing-edge: a remote device deployment model with exemplary environmental credentials, a zero-carbon footprint and a core functional option to assess users’ personal stress levels and state of mind.

We jest, of course, but edge for good is no bad thing. Let’s champion that too.

There’s plenty more to read about edge computing. Check out these articles on the top four best practices, the risks and the benefits.

