Disruptive innovation with LOKI

Redefining OpenStack and Kubernetes integration in LOKI 

In recent months, I dove into the myriad of use cases that revolve around LOKI: a combination of Linux, OpenStack, Kubernetes, and open-source infrastructure technologies. What makes LOKI so intriguing is its capacity to empower engineers to explore fresh avenues of technology adoption, setting their solutions apart from the crowd. 

Just this week, I had an enlightening conversation with a client who has devised an unconventional approach to harnessing LOKI in their production environment. Given the uniqueness of their setup, I couldn’t resist sharing this innovative twist with all of you. In this particular case, our focus is on utilizing OpenStack to deliver virtual machines and bare metal capabilities to end-users and teams. However, what sets this scenario apart is the atypical manner in which OpenStack is managed. I’ll circle back to that in a moment. 

For those well-acquainted with OpenStack, the deployment process has historically presented some formidable challenges. It extends beyond mere software installation: it demands thoughtful platform architecture that incorporates existing infrastructure while accommodating future scalability and adaptability. The goal with OpenStack is to minimize operational overhead, freeing you to concentrate on development. 

In the realm of OpenStack infrastructure management, two solutions are most common: 

  • Canonical Juju. 
  • Red Hat Director. 

Both of these are recognized as lifecycle management platforms for OpenStack. They’ve been meticulously designed to seamlessly bridge the gap between planning, design, installation, and the ongoing operation, administration, and management of your environment. 

Now, picture this: What if we used bare metal Kubernetes as the lifeblood of OpenStack’s lifecycle management? In other words, what if we ran OpenStack within Kubernetes and approached all lifecycle management tasks from a DevOps perspective? 

Within this innovative infrastructure, we run OpenStack as a single application on Kubernetes, built and operated with the help of Kolla and Helm. Kolla builds container images for the OpenStack services that can serve various environments, while Helm orchestrates the many Kubernetes objects required to run them. In practice, the same Kolla-built images you might otherwise run under Docker or Podman are deployed atop Kubernetes. 
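To make that a little more concrete, here is a minimal sketch, in Python, of driving such a deployment by shelling out to Helm. The service list, chart paths, and values files are illustrative assumptions rather than the client's actual setup; in practice, projects such as openstack-helm provide per-service charts that consume Kolla-built images.

```python
# Minimal sketch: install/upgrade a few OpenStack services as Helm releases.
# Chart paths, values files, and the service list are placeholders.
import subprocess

NAMESPACE = "openstack"                    # namespace holding the OpenStack control plane
SERVICES = ["keystone", "glance", "nova"]  # illustrative subset of OpenStack services


def helm_deploy(release: str, chart: str, values_file: str) -> None:
    """Install or upgrade a single OpenStack service as a Helm release."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", NAMESPACE,
            "--create-namespace",
            "--values", values_file,
            "--wait",
        ],
        check=True,
    )


if __name__ == "__main__":
    for svc in SERVICES:
        # Each service gets its own release and values file (image tags,
        # replica counts, endpoints), all of which can live in Git.
        helm_deploy(svc, f"./charts/{svc}", f"./values/{svc}.yaml")
```

The point of the sketch is simply that every OpenStack service becomes an ordinary Helm release on the cluster, so the usual Kubernetes tooling applies to the OpenStack control plane itself.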

The advantages of using Kubernetes as infrastructure management for OpenStack 

So, what are the advantages of working in this manner? 

  1. We pave the way for the continuous deployment of OpenStack configurations in a DevOps paradigm, facilitating updates, changes, configuration adjustments, migrations, patches, and more, all in a fully automated fashion. When changes are made to OpenStack, they're developed by a DevOps team and committed to Git. Subsequently, these changes are rolled out to a staging environment, tested, and once validated, introduced into production, all fully automated within this cluster (see the sketch after this list). 
  2. Another compelling reason why this approach works is its harmony with the Cloud Native framework, making it a natural fit for both OpenStack and Kubernetes and fully in line with the DevOps ethos: a way of working that appeals to every engineer who wants to automate as much as possible. 
  3. Lastly, this approach reduces the need for deep knowledge of Juju and Director, which is typically required when working with OpenStack and its traditional lifecycle management platforms. By harnessing Kubernetes and Helm, you can focus on your solutions without specialized expertise in those tools. 
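As promised in the first point, here is a hedged sketch of that staging-then-production flow, again in Python around Helm and kubectl. The release name, umbrella chart, values files, and the very simple "all pods Running" check are assumptions made to keep the example short; a real pipeline would be triggered by Git commits in a CI/CD system and use proper validation.

```python
# Sketch of a Git-driven promotion flow: deploy to staging, validate,
# then roll the same committed configuration out to production.
import subprocess
import sys


def deploy(environment: str, values_file: str) -> None:
    """Roll the committed OpenStack configuration out to one environment."""
    subprocess.run(
        ["helm", "upgrade", "--install", "openstack", "./charts/openstack",
         "--namespace", environment, "--values", values_file, "--wait"],
        check=True,
    )


def smoke_test(environment: str) -> bool:
    """Placeholder check: no pods left in a non-Running phase."""
    result = subprocess.run(
        ["kubectl", "--namespace", environment, "get", "pods",
         "--field-selector=status.phase!=Running", "--no-headers"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() == ""


if __name__ == "__main__":
    deploy("staging", "./values/staging.yaml")
    if not smoke_test("staging"):
        sys.exit("staging validation failed; production deploy skipped")
    deploy("production", "./values/production.yaml")
```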

Challenges and future plans of using Kubernetes as infrastructure management for OpenStack 

Of course, there are challenges too. This approach calls for a deeper understanding of the layers within your infrastructure sandwich, which can grow complex rapidly. Additionally, your teams should already be well-versed in DevOps practices to manage the OpenStack infrastructure efficiently. While Helm is a mature project, Red Hat Director and Canonical Juju, especially in their roles in managing OpenStack, bring more years of experience to the table. 

As previously mentioned, in this specific use case we're currently running OpenStack to offer virtual machines and bare metal services to end users and teams. The next phase involves providing containers within OpenStack, bolstering them with robust isolation and security features. The future infrastructure might therefore look as follows: Hardware > Bare Metal Kubernetes > OpenStack > Kubernetes, a layered stack that delivers bare metal, virtual machines, and containers in a secure and efficient way. 
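Seen from the end user's side, the top of that stack behaves like any other OpenStack cloud. The short sketch below, using the openstacksdk Python client, shows a team requesting a virtual machine without ever touching the bare metal Kubernetes layer underneath; the cloud, image, flavor, and network names are placeholders for whatever the environment actually defines.

```python
# A team consumes the OpenStack layer as usual, unaware of the
# Kubernetes-managed control plane beneath it.
import openstack

# Credentials and region come from clouds.yaml or OS_* environment variables;
# "loki-cloud" is a placeholder cloud name.
conn = openstack.connect(cloud="loki-cloud")

server = conn.create_server(
    name="team-a-dev-vm",
    image="ubuntu-22.04",    # image registered in Glance (placeholder)
    flavor="m1.medium",      # flavor defined in Nova (placeholder)
    network="team-a-net",    # tenant network in Neutron (placeholder)
    wait=True,               # block until the VM reaches ACTIVE
)
print(f"{server.name} is {server.status}")
```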

We are eager to hear your thoughts on this emerging development. Do you see this as the future of combining bare metal, virtual machines, and containers? 
