An Edge Cloud case study

Since 2019, Fairbanks has built up broad and deep experience with Edge Computing, whether it is based on virtual machines, containers, or a combination of the two.

Recently, we helped one of our clients with the design of an Edge Computing architecture and the implementation of the underlying infrastructure. The case was a little different from usual, because we needed to make adjustments to the OpenStack Availability Zones and add custom Charms. In the first phase, we evaluated several Edge Cloud solutions, such as StarlingX and OpenStack. In the second phase, we designed and adjusted a software solution to roll out the Edges in the most efficient and reliable way.

So, what solution did we go for, and why? What adjustments did we make to standardize and automate rolling out the edges? And what did the overall process look like? Let’s examine it step by step in this blog.

What is an Edge Cloud? 
Edge Computing is a form of cloud computing in which computing is distributed across smaller local infrastructures rather than concentrated in one large, centralized location, on what is known as an “origin server.” In effect, Edge Computing recreates a cloud-like system using “edge servers” or “micro-servers” instead of origin servers. The idea of an Edge Cloud, then, is to have an infrastructure that is closer to the final user.

Advantages 
When it comes to the advantages: from a financial perspective, Edge Clouds are more appealing because they are smaller than regular clouds. From a technical perspective, running closer to the end user provides lower latency. In other words, you usually have one central cloud that is very powerful, and in addition to that cloud you set up extra clouds (the Edge Clouds) that are closer to the final user. In the future, Edge Clouds are expected to serve many non-human users in the field of the Internet of Things (IoT), such as processes running in houses and cars. As you might understand, such connections need to be very fast, with as little latency as possible.

In general, the companies that need Edge Computing are companies whose own clients are spread across the world and need low-latency services. At Fairbanks and 42on, we usually see use cases in the gaming industry (online multiplayer functionality), the media streaming industry (film, TV series, and music streaming), and the telecommunications industry.

So, the advantages are that it is financially more appealing and that it provides low latency. What about the disadvantages?

Disadvantages 
When it comes to the general disadvantages of Edge Cloud Computing, one of them is limited capacity. It depends on how you deploy the Edge Clouds, but in traditional computing you usually deploy large environments in large data centers. If you move to scenarios where the infrastructure is distributed, each individual edge location will have less capacity than a single central data center.

Secondly, another disadvantage is that such an environment is more complex to manage and configure. For instance, instead of managing one environment, you need to manage tens or even hundreds of environments. However, if done correctly, this can be automated, as the sketch below illustrates.
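To give an idea of what that automation can look like: below is a minimal sketch using the openstacksdk Python library that checks the hypervisor state of every edge. This is an illustration, not the exact tooling we used; the cloud names are hypothetical entries in a clouds.yaml file.

```python
import openstack  # openstacksdk

# Hypothetical edge cloud names; each one is an entry in clouds.yaml.
EDGE_CLOUDS = ["edge-ams-1", "edge-fra-1", "edge-lon-1"]

for name in EDGE_CLOUDS:
    conn = openstack.connect(cloud=name)
    # Ask Nova for this edge's hypervisors and count the ones
    # it reports as up.
    hypervisors = list(conn.compute.hypervisors(details=True))
    up = sum(1 for h in hypervisors if h.state == "up")
    print(f"{name}: {up}/{len(hypervisors)} hypervisors up")
```

The same loop-over-clouds pattern extends to any management task, which is what keeps tens or hundreds of edges tractable.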

What Edge Cloud Architecture did we choose? 
Together with the client, we first looked at StarlingX. StarlingX is a complete cloud infrastructure software stack for Edge Cloud computing and is used by the most demanding applications in industrial IoT, telecom, video delivery, and other ultra-low-latency use cases. With the deterministic low latency required by edge applications, and tools that make the distributed edge manageable, StarlingX provides a container-based infrastructure for scalable edge implementations that are ready for production now.

The solution looked very promising, but the challenge was that StarlingX mainly focuses on deploying container-based workloads on Edge Clouds, not virtual machines. For a large part this would be fine, but our client needed a combination of bare metal, virtual machines, and containers running on the edges.

So, StarlingX is very interesting if you use a container-based Edge Cloud. But we needed more functionality, and we found it in OpenStack, which is the solution we eventually went for. OpenStack is complex software that was not initially configured or developed for an Edge Cloud use case. Therefore, we needed to make some modifications and adaptations that are not completely natural in OpenStack.

We had two major challenges to resolve:

1. OpenStack Availability Zones were not specifically developed for an Edge Cloud. Nevertheless, we found a way to use them in the OpenStack configuration by tweaking some aspects. Normally, OpenStack Availability Zones can be a challenge because of network throughput, but because the network latency and connectivity between the data centers were very well built, our performance tests gave good results. Having the option of an Availability Zone was important because, through an Availability Zone, OpenStack maps out the different data centers. As a result, a user can select where to deploy and run a specific Edge Cloud workload, which is key to manageability, as discussed earlier.

A local Ceph cluster for each Edge Cloud, together with the proper set of Availability Zones, allowed the intensive use of network, storage, and compute resources to be neatly confined to each Edge Cloud. A sketch of how such zones can be defined follows below.
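As an illustration of the Availability Zone setup: in OpenStack, a Compute Availability Zone is exposed by giving a host aggregate an availability_zone property. The sketch below uses the openstacksdk Python library; the cloud entry, aggregate name, and hostnames are hypothetical.

```python
import openstack  # openstacksdk

conn = openstack.connect(cloud="edge-ams-1")  # hypothetical clouds.yaml entry

# A host aggregate with an availability_zone property is what exposes
# an Availability Zone that users can target when launching workloads.
aggregate = conn.compute.create_aggregate(
    name="edge-ams-1", availability_zone="edge-ams-1"
)

# Hypothetical hostnames of the compute nodes in this edge location.
for host in ["edge-ams-1-compute-01", "edge-ams-1-compute-02"]:
    conn.compute.add_host_to_aggregate(aggregate, host)
```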

2. We used the standard Canonical Ubuntu tooling, like MAAS and Juju, but we had to put a little more effort into configuring the Edges. For this, we made significant adjustments to use Availability Zones for Compute, Network, and Volumes, and we configured local Glance servers for each Edge Cloud.

When a new virtual machine is launched, its image must be downloaded from the Glance server. If that Glance server is located in the central cloud, this can significantly slow down the launch process, because downloading a virtual machine image several hundred times puts a high load on the central cloud. Because of this, we configured local Glance servers that are synchronized with the central one, and we set several parameters to point to the central or local cloud as appropriate. When you launch a new virtual machine, instead of downloading the image from the central OpenStack cloud, it is downloaded directly from the Edge Cloud. The sketch below illustrates such a synchronization.
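The following is a minimal sketch of such an image synchronization with the openstacksdk Python library. It is an illustration under assumptions, not our exact implementation: the cloud names are hypothetical clouds.yaml entries, and image names are assumed to be unique.

```python
import openstack  # openstacksdk

central = openstack.connect(cloud="central")    # central Glance
edge = openstack.connect(cloud="edge-ams-1")    # local edge Glance

edge_names = {img.name for img in edge.image.images()}

for img in central.image.images():
    if img.name in edge_names:
        continue  # image already available on the edge
    # Pull the image bytes from the central Glance once...
    data = central.image.download_image(img)
    # ...and re-create it on the local Glance, so instances launched on
    # this edge fetch the image locally instead of over the WAN.
    edge.image.create_image(
        name=img.name,
        data=data,
        disk_format=img.disk_format,
        container_format=img.container_format,
    )
    print(f"synced {img.name}")
```

Run periodically (for example from cron), this keeps each edge's image catalogue in step with the central cloud while keeping launch traffic local.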

What distribution and why? 
The distribution chosen for this Edge Cloud use case is Ubuntu. The main reason is that Ubuntu is already well known to the client’s technical staff. As for our own staff: Fairbanks is distribution independent and supports all major OpenStack distributions, such as CentOS, Rocky Linux, and SUSE. So to us, the client’s decision made sense. Additionally, the Canonical tooling is well known to the Fairbanks team as well.

Desired result achieved? 
At this moment, the initial deployment is ready and we are able to run different workloads. We made sure it is easy to select where we want to run those workloads. Also, from the final user’s point of view, the only thing the client needs to do is select where to run something and follow the standard OpenStack way of launching an instance; a minimal sketch follows below.
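For the final user, this boils down to a standard OpenStack server create where only the Availability Zone points at the chosen edge. A minimal openstacksdk sketch, with hypothetical image, flavor, network, and cloud names:

```python
import openstack  # openstacksdk

conn = openstack.connect(cloud="central")  # hypothetical clouds.yaml entry

image = conn.image.find_image("ubuntu-22.04")          # hypothetical names
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("edge-ams-1-net")

# The only edge-specific choice is availability_zone: it pins the
# instance to the chosen Edge Cloud.
server = conn.compute.create_server(
    name="edge-demo",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    availability_zone="edge-ams-1",
)
conn.compute.wait_for_server(server)
```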

Furthermore, if you have lots of workloads and deployment and management are well automated, these Edge Clouds can grow along with demand. This is a major key to success in all IT use cases.

Are you interested in case studies? You can read our LOKI case study through the following link: https://fairbanks.nl/loki-openshift-on-openstack-egress-ip-address-per-project/

If you have questions about this case or about your own Edge Cloud use case, let me know. I would love to discuss the challenges that you have or see.
