Inside Edge Computing

Ask a dozen technologists what edge computing is, and you'll get a dozen answers. Each vendor defines it differently, based on what they already sell, or what they think venture capitalists want to hear. Edge computing is the new cloud: a catch-all buzzword that has been diluted until it is nearly, but not quite, meaningless. The only thing that's certain about edge computing is that it will require organizations to change how they think about their IT.
If the above seems somewhat tongue-in-cheek, understand that this is an assessment born of experience. Every major change in IT leads to big changes in how IT is managed. The entire IT industry oscillates between bleeding-edge solutions that require a room full of PhDs to operate and commoditized versions of the previous decade's ground-breaking technologies that require only a handful of buttons.

What Is Edge Computing?

Most definitions of edge computing involve securely extruding Network A into (or geoproximately near) Network B. This is done because Network B requires resources provided by Network A. But for whatever reason (latency, throughput, connectivity costs), it isn't really feasible for Network B to ship all the data to Network A, let Network A chew on it, and then send the results back.
Core to the idea of edge computing is that edge computing servers are designed to be managed entirely remotely. While various technologies, from self-healing storage to infrastructure-as-code, are discussed as the solution to ensuring minimal-to-zero on-premises management, the basic idea of “plug it in and forget it” is an important part of edge computing.

Type 1 Edge Computing

Edge computing or edge networking may be a relatively new concept, but there are already variations on the theme. Most of what you’ll read on the topic, whether in trade magazines or blogs, is driven by vendors. Vendors are predominantly interested in Type 1 edge computing. Type 1 edge computing is the extrusion of a public cloud into a specific location.
The important feature of Type 1 edge computing is that it’s an extension of a managed service. Type 1a edge computing involves the extension of public cloud services into an organization’s premises. Amazon’s Greengrass is a classic example: organizations place an instance of Greengrass Core near devices that want to use Lambda functions, and they can do so even when connectivity to Amazon’s cloud is down.
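To make this concrete, here is a minimal sketch of the kind of function an organization might deploy to a Greengrass Core. The topic name and payload fields are illustrative assumptions, not taken from any particular deployment:

```python
# A minimal sketch of a Lambda-style function running on a Greengrass Core.
# The greengrasssdk routes publish() calls through the local Core, so the
# function keeps working even when the link to the AWS cloud is down.
# The topic name and payload fields below are illustrative assumptions.
import json

import greengrasssdk

client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # React to a reading from a nearby device and publish a local alert.
    temperature = event.get("temperature", 0)
    client.publish(
        topic="local/alerts",
        payload=json.dumps({"overheating": temperature > 90}),
    )
```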
Type 1b edge computing is less common. This involves public cloud providers placing microdata centers in a location geoproximate to demand. The internet is littered with arguments about whether or not this counts as edge computing, fog computing, or just traditional cloud computing. The debate won’t be resolved here.

Type 2 Edge Computing

Beyond the marketing money of the major public cloud providers and their startup ecosystems, there is another type of edge computing that is just as relevant. Type 2 edge computing involves the extrusion of an organization’s private cloud into another organization’s network.
The classic example of Type 2 edge computing is a logistics provider (such as FedEx) placing a dedicated terminal into the shipping department of a customer. The dedicated unit is usually managed and maintained by the logistics provider, and this is usually done so that the customer’s IT team can’t break it.

Why Edge Computing

In general, edge computing is deployed for one of three reasons. The one most discussed (because it is most likely to create new revenue streams for vendors) is latency requirements.
The latency requirements argument for edge computing is simple, if not quite as common a requirement as it is made out to be. You have two networks: Network A and Network B. Network B requires access to resources provided by Network A, but needs to do something with/to those resources so fast that the speed of light imposes unacceptable delays in using traditional public clouds.
The example most often cited is driverless cars. The story behind this use case goes something like this: in a future where driverless cars are the new normal, cities will be filled with cars that are packed with sensors. These sensors let cars see, and thus react to their environment.
This works great on highways, but cities are crowded with more unpredictability—like individuals walking into the street from behind parked cars. No person, or driverless car, can see everything.

If, however, all those sensor-equipped autonomous cars sent all their data into the cloud, then the cloud very well could see everything; it would have multiple angles to look at every object.
If you could crunch the data fast enough and deliver it back to the cars fast enough, those driverless cars could effectively see around corners. Accident rates would theoretically plummet. Unfortunately, the cloud is often dozens, if not hundreds of milliseconds away. This is an eternity when you’re worried about things like stopping distances, big metal cars, and defenseless humans. Sprinkling mini data centers around the city would solve this.
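The arithmetic behind this is straightforward. A quick back-of-the-envelope sketch, where the round-trip latency figures are illustrative assumptions:

```python
# How far does a car travel while waiting for a round trip to complete?
# The round-trip times below are illustrative assumptions.
speed_kmh = 50                       # a typical city speed limit
speed_mps = speed_kmh * 1000 / 3600  # roughly 13.9 metres per second

for label, rtt_ms in [("distant cloud", 150), ("nearby edge site", 5)]:
    blind_metres = speed_mps * rtt_ms / 1000
    print(f"{label}: {rtt_ms} ms round trip = {blind_metres:.2f} m travelled blind")

# distant cloud: 150 ms round trip = 2.08 m travelled blind
# nearby edge site: 5 ms round trip = 0.07 m travelled blind
```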
Absent driverless cars for the time being, the latency argument is predominantly being driven by use cases such as autonomous warehouses, next-generation medical monitoring, and consumer-facing AI-driven chatbots.

The Security Argument

Many edge computing deployments are being driven by intellectual property and/or privacy requirements. Network A needs to be able to provide services to Network B, but doesn’t want to give up control over data, intellectual property, and so forth. For whatever reason, simply providing these services over the internet is a no-go.
In order to provide the relevant services, Network A’s operators either place nodes into Network B (network extrusion) or place nodes geoproximate to Network B (the microdata center model). Network A’s nodes are fully controlled and managed by Network A. This means that Network B can consume the edge resources of Network A, but cannot “poke under the hood” of Network A.
In this case, Network A can be a public cloud provider, a vendor, or two organizations that are part of a supply chain. Before edge computing became a buzzword, these sorts of arrangements were typically called “managed appliances.” The FedEx example mentioned above is one use case.
Another example is contract manufacturing. Here, an OEM might subcontract to other manufacturers in order to fulfill a particularly large order from a customer. The OEM might place a cluster that performs multiple tasks on the premises of each contract manufacturer. The cluster might run workloads that monitor the sensor data of the subcontractor's manufacturing equipment. It will likely offer up line-of-business software and middleware that the subcontractor's staff can use, and which the subcontractor's own IT can interface with. This helps with order tracking, lifecycle management of the end product, and supply chain management.
This type of edge computing isn't always done with managed appliances located on premises. If there are clusters of users located near one another, it is fairly common to see organizations rent space in a local colocation facility and simply lease dedicated network connections between the customer locations and the colocation facility. This eliminates any possibility of a curious customer tampering with the edge servers, while still providing secure, low-latency access that is independent of the more central version of these services.

Managing Multi-Site Deployments

The third major reason for deploying edge computing involves the management of large multi-site deployments, typically in retail environments. This use case centers more on the “managed appliance” aspect of edge computing than anything else. Here, any location without a dedicated IT team (which is most of them) is treated as a “customer network,” and the central IT network as the cloud provider. Individual locations receive their IT pre-configured from the central IT team. Ideally, local staff should not have to do more than rack the servers, plug them in, and turn them on.

This is unquestionably the most popular use case for edge computing, as it has been around for decades. Prior to the creation of the term “edge computing,” this was simply called “IT,” and was practiced by millions of organizations around the world. What makes everything old new again is the software.

Edge Computing Versus Traditional On-Premises IT

Indeed, many edge computing use cases closely resemble traditional IT. IT teams have been remotely managing servers, desktops, mobile phones, and more for decades. Innumerable technologies have been developed to assist them, ranging from Lights Out Management (LOM) boards for servers to remote management software for endpoints, such as TeamViewer.
The line between edge computing and traditional IT is blurry and depends a lot on who you ask. If there is a clear dividing line between “regular IT” and edge computing, it’s that edge computing solutions are designed from the ground up to be remotely managed.
To put this in context, let's look at the cult movie Snakes on a Plane: Samuel L. Jackson is trapped on a plane filled with venomous snakes intent on killing everyone on board. In one scene, Mr. Jackson ventures down the aisle of the plane with a stun gun, individually tasering snakes as they get too close. Given the sheer number of snakes, this proves inadequate, and he is quickly forced to retreat. Eventually, the snakes are defeated using the horribly inaccurate but time-honored tactic of depressurizing the plane, blowing the snakes out into the sky.
As ridiculous as this may sound, it actually serves as a good example of the blurry line between edge computing and traditional IT. In traditional IT, administrators manage individual servers, desktops, and other IT devices. Some aspects of management are automated, and some tasks are performed on groups of computers all at once. Despite this, traditional IT never really gets away from individually tasering snakes over TeamViewer, a LOM board, SSH, or RDP.
Edge computing is hypothetically about defining everything about systems deployed as edge servers so that if something goes sideways, you simply blow the workload (or the whole server) away and rebuild from definition files. If you think this sounds a lot like infrastructure as code or Pets Versus Cattle, you’re not wrong. Much of what is called edge computing today can honestly be thought of as branch office IT performed with management tools and approaches developed in this century.
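As a minimal sketch of that pattern, consider the reconciliation loop below. The three provisioning helpers are in-memory stubs standing in for real tooling (a Cloudify blueprint deployment, a Terraform apply, or similar):

```python
# "Cattle, not pets": compare an edge node's actual state to its definition
# file and rebuild it rather than repair it. The three helpers are stubs
# standing in for real provisioning tooling.
import json

STATE = {}  # node_id -> currently deployed configuration

def get_actual_state(node_id):
    return STATE.get(node_id)

def destroy(node_id):
    STATE.pop(node_id, None)

def provision(node_id, desired):
    STATE[node_id] = desired

def reconcile(node_id, definition_path):
    """Rebuild a drifted edge node from its definition file."""
    with open(definition_path) as f:
        desired = json.load(f)  # the definition file is the source of truth
    if get_actual_state(node_id) != desired:
        # No tasering individual snakes over SSH: blow the node away
        # and recreate it exactly as the definition describes.
        destroy(node_id)
        provision(node_id, desired)
```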

Additional Differentiators

Beyond management solutions that move us past individually tasering snakes, products marketed as edge computing don't have very many things in common. Or, more precisely, many of them do have things in common, but the percentage of products sharing any given feature drops off quickly.
Edge computing solutions tend to have integrated offsite data storage. The offsite copy is frequently considered the “single source of truth.” The on-premises equipment, on the other hand, is merely a data cache, or a local processing node/cluster/rack which processes large volumes of data and pushes the crunched numbers up to the parent cloud. Combined with out-of-the-box remote management, edge computing is designed to be as “zero touch” as possible.
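A rough sketch of that relationship follows, with an in-memory dictionary standing in for the provider's actual object store (S3, Azure Blob, or similar):

```python
# The "offsite copy is the single source of truth" pattern: the edge node
# serves reads from a local cache but treats the parent cloud as
# authoritative. CLOUD is an in-memory stand-in for a provider storage API.
import time

CLOUD = {}       # stand-in for the parent cloud's object store
CACHE_TTL = 300  # seconds a local copy is trusted before re-checking

_cache = {}      # key -> (value, fetched_at)

def read(key):
    """Serve from the local cache when fresh; the cloud copy wins otherwise."""
    entry = _cache.get(key)
    if entry and time.time() - entry[1] < CACHE_TTL:
        return entry[0]
    value = CLOUD.get(key)  # authoritative copy
    _cache[key] = (value, time.time())
    return value

def write(key, value):
    """Write through: the offsite copy is updated first, then the cache."""
    CLOUD[key] = value      # single source of truth
    _cache[key] = (value, time.time())
```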
Traditional computing treats various technologies (OS management, app management, backups, DR, etc.) as separate roles. Edge computing usually treats on-premises equipment merely as an extrusion of a larger cloud. In other words, management of all these roles is part and parcel of the solution, just as it is in a cloud.

The Promise of Edge Computing

Edge computing has a number of seemingly fantastic use cases that are often used in marketing or hype. The most frequently hyped edge computing use case—the driverless car that can see around corners—has not yet come to fruition, nor is it likely to in the near future. However, there are other, more realistic use cases that effectively demonstrate the promise of edge computing.

One of these use cases is the hyper-connected hospital. Imagine a hospital in which a patient's medical records are made available, on a tablet, to any authorized person who walks into the patient's room. The tablet would detect both the RFID tag of the person holding it and the RFID tag of the room into which that person walked. It would pull up the relevant records, verify authorization, and simply make the relevant data available. Patients and equipment would be tagged too. If a patient wandered into a place they shouldn't be, the staff would be alerted. If a tablet went missing, it would be wiped as soon as it left the hospital door, and an alarm would sound. All of this tracking is incredibly data intensive, and operates on highly sensitive information.
Here, edge computing would provide the hyper-connected hospital with managed services and proprietary software solutions that could handle the data on-premises, in real time, without putting patient data at risk, or creating unacceptable delays. More important, existing hospitals could be upgraded to this technology without needing specialists in all of the various components—the software and services are delivered as managed solutions by remote providers.
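The access-control piece of this scenario is simple to sketch. All of the tag IDs, roles, and records below are hypothetical; a real deployment would sit on top of the hospital's identity and EHR systems:

```python
# A sketch of the RFID-driven access check described above.
# Every tag ID, role, and record below is hypothetical.
AUTHORIZED_ROLES = {"doctor", "nurse"}
STAFF = {"tag-0042": "doctor", "tag-0777": "janitor"}  # RFID tag -> role
ROOM_PATIENT = {"room-12": "patient-314"}              # room tag -> patient
RECORDS = {"patient-314": "chart, meds, allergies..."}

def records_for(person_tag, room_tag):
    """Return the room's patient's records only to authorized staff."""
    role = STAFF.get(person_tag)
    if role not in AUTHORIZED_ROLES:
        return None  # deny, and perhaps alert staff
    patient = ROOM_PATIENT.get(room_tag)
    return RECORDS.get(patient)

print(records_for("tag-0042", "room-12"))  # the doctor sees the chart
print(records_for("tag-0777", "room-12"))  # the janitor sees nothing
```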
Drone-staffed warehouses are another example. In a similarly hyper-connected scenario, drones could handle all aspects of warehouse work. They could unload trucks, stock shelves, perform order picking, do inventory, and more. Today, drones would have to rely on carefully tagged inventory, with data accurately entered for each and every one of those tags. They would probably rely on RFID tags to navigate shelves as well. This is costly, difficult, and time-consuming, which is why even Amazon's warehouses rely heavily on humans. However, an edge computing-powered warehouse theoretically could provide drones with the ability to perform image recognition and other forms of in-depth analysis. Public cloud providers already offer highly advanced (and proprietary) facial recognition services.
The same public cloud providers are also working on more generic image recognition capabilities. Every time you fill out a Google CAPTCHA, you're teaching Google's AI systems what's what. In time, these services will be available to organizations as edge computing services, and drones will be able to understand what they're looking at and archive it appropriately.

Edge Computing in the Real World

Edge computing is being used in the real world today, and it is being used beyond simply building a better retail IT management solution. One notable application of edge computing is agribusiness. Most people don’t think of farms as being particularly well connected to the internet. As a general rule, farms are pretty lucky if they can get reliable 3G mobile connectivity. Despite this, agribusinesses are making extensive use of various modern technologies. These include Internet of Things (IoT) sensors, drones, various types of robots, and machine vision. (It turns out AI is really good at identifying insects and other pests.)
Even in the middle of the city, with a fiber optic connection, this can be a lot of data to ship back and forth. Just ask any of Canada’s many cannabis cultivators; by and large, they grow their crops indoors, in fully modern, instrumented hydroponic facilities. Even they struggle with data volume. Farmers working the land are unlikely to have the connectivity to put all their data through an internet pipe, let a cloud provider chew on it, and then pull down the results. Instead, they use edge computing services deployed locally by public cloud providers to crunch the data locally, and only send the results back to the public cloud for long-term storage.
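A minimal sketch of that data-reduction pattern, using Python's standard library; the field names and the idea of soil-moisture samples are illustrative assumptions:

```python
# Crunch raw sensor readings at the edge and ship only the summary upstream.
# Field names and the soil-moisture scenario are illustrative assumptions.
import json
import statistics

def summarize(readings):
    """Collapse many raw soil-moisture samples into a tiny summary."""
    values = [r["moisture"] for r in readings]
    return {
        "count": len(values),
        "mean": statistics.mean(values),
        "min": min(values),
        "max": max(values),
    }

readings = [{"moisture": 0.31}, {"moisture": 0.28}, {"moisture": 0.35}]
payload = json.dumps(summarize(readings))
# Only `payload` (a few dozen bytes) crosses the farm's thin uplink;
# the raw samples stay on the local edge cluster.
print(payload)
```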
This same pattern is being replicated over and over in other industries in which locally generated data volume exceeds what’s reasonable to ship back to a central cloud. These include pipeline monitoring, distributed geoinformatics, geosensing (earthquake detection), automated distributed astronomy, and many more.

Conclusion: The Need for Multi-Cloud Management Solutions

Operations teams and developers alike are going to face additional challenges posed by edge computing. Already struggling with the hybrid cloud, multi-cloud deployments, and now the hybrid multi-cloud, IT teams will have to figure out how to control the deployment, management, monitoring, and regulatory compliance of applications deployed to infrastructures they don’t physically manage.
Some of these challenges will involve consumption: IT teams consuming products provided by public cloud providers. Others will involve extruding one's own network into the networks of others. To complicate matters, edge computing servers are sometimes located deep within someone else's network. This means there are network considerations: the edge clusters have to “call back to the mothership,” often through layers of firewalls.
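One common answer to the firewall problem is for the edge node to only ever initiate outbound connections, polling the central manager for work; a network that allows outbound web traffic then needs no inbound rules at all. A rough sketch, where the manager URL and response shape are assumptions:

```python
# The edge node only makes outbound HTTPS calls to the central manager,
# so the host network needs no inbound firewall rules. The manager URL
# and response shape below are illustrative assumptions.
import time

import requests

MANAGER = "https://manager.example.com/api/edge"
NODE_ID = "edge-site-17"

def run(task):
    print(f"executing task: {task}")  # stand-in for the real workload runner

def call_home():
    while True:
        # Outbound-only: any firewall that permits web browsing permits this.
        resp = requests.post(
            f"{MANAGER}/heartbeat", json={"node": NODE_ID}, timeout=10
        )
        for task in resp.json().get("pending_tasks", []):
            run(task)
        time.sleep(30)  # poll interval; a real agent might hold a connection open
```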
Managing all of this requires a multi-cloud management solution that can handle the underlying physical infrastructure as well as hypervisors, microvisors, OSes, workload automation, and infrastructure orchestration, not to mention the applications themselves. This is where Cloudify comes in.

Cloudify is a multi-cloud management solution. Cloudify manages Azure, AWS, OpenStack, and Kubernetes. Cloudify manages networking, including software-defined networking and network functions virtualization. Cloudify can also manage popular applications. In short, Cloudify can manage the clouds you use, and the edge computing extrusions of those clouds, wherever that edge happens to live.

Ilan Adler
Ilan is a marketing data analyst at Cloudify. He works to help users understand the power of the Cloudify orchestration framework and how it can fit a variety of use cases, from edge orchestration and NFV orchestration to vCPE, hybrid cloud, and more.
