Getting started with Kubernetes 

In today’s rapidly evolving technological landscape, containerization has become a cornerstone of modern application development and deployment. Kubernetes, often abbreviated as K8s, has emerged as a leading container orchestration platform that empowers developers to efficiently manage, scale, and deploy containerized applications. In this tutorial, we’ll delve into the basics of Kubernetes and provide step-by-step instructions to help you get started on your Kubernetes journey. 

What is Kubernetes? 

Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). With Kubernetes, you can seamlessly manage complex applications across multiple containers, distribute workloads, and ensure high availability without the need for manual intervention. 

Kubernetes components and architecture 

Kubernetes features a complex yet efficient architecture. Its control plane, hosted on the master node within each cluster, encompasses the essential components:

  • API server: handles requests to the cluster.
  • etcd: stores cluster configuration and state.
  • Scheduler: places containers on the most suitable nodes.
  • Controller manager: maintains the desired state of the cluster.

Worker nodes within the cluster execute containers and report back to the master. On top of this, Kubernetes provides a set of building blocks:

  • Pods: the smallest deployable units, housing one or more tightly linked containers.
  • Services: offer load balancing and service discovery within the cluster.
  • ReplicaSets and Deployments: manage replica counts for fault tolerance and updates.
  • Namespaces: segregate clusters virtually.
  • ConfigMaps and Secrets: keep configuration separate from application code.
  • Ingress: handles external access.
  • Volumes: ensure data persistence for containers within the cluster.

By abstracting infrastructure intricacies, Kubernetes simplifies scaling and management of containerized applications across diverse clusters.
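Once you have a cluster running (we set one up in the next section), you can see several of these control-plane components for yourself. On a Minikube cluster, for example, they run as pods in the kube-system namespace:

kubectl get pods -n kube-system

The output typically includes the API server, etcd, the scheduler, and the controller manager, alongside add-ons such as CoreDNS and kube-proxy.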

Prerequisites 

Before you begin your Kubernetes journey, you need to have a few prerequisites in place:

  • Basic familiarity with containers and the command line.
  • A container or virtual machine manager, such as Docker, to run the Minikube cluster.
  • The kubectl command-line tool for interacting with the cluster.
  • Minikube itself, installed on your local machine.
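If the tools are already installed, you can quickly verify them from a terminal; the Docker check below assumes you use Docker as Minikube's driver:

minikube version
kubectl version --client
docker --version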

Setting up a local Kubernetes cluster with Minikube 

To get started with Kubernetes, we’ll begin by setting up a local cluster using Minikube. This will allow you to experiment with Kubernetes concepts in a controlled environment. 

  • Start the cluster: Open your terminal and run the following command to start Minikube: 

minikube start

  • Verify cluster status: Ensure that the cluster is up and running by executing the following command: 

kubectl cluster-info
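As an additional check, you can confirm that the Minikube node has registered with the cluster and reports a Ready status:

kubectl get nodes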

Creating and managing deployments 

Deployments in Kubernetes define how applications should be managed and scaled. They ensure that the desired number of replicas are running and can handle updates and rollbacks gracefully. 

  • Create a deployment: Let’s create a simple nginx deployment by running the following command: 

kubectl create deployment nginx-deployment --image=nginx:latest

  • Check deployment status: To see the status of your deployment, use: 

kubectl get deployments  

  • Scale the deployment: You can scale your deployment to three replicas using the following command:  

kubectl scale deployment nginx-deployment --replicas=3 
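The commands above manage the deployment imperatively. In practice, Kubernetes resources are usually described declaratively in YAML manifests. As a small sketch of that workflow, you could generate an equivalent manifest with a client-side dry run and apply it from a file (the file name nginx-deployment.yaml is just an example):

kubectl create deployment nginx-deployment --image=nginx:latest --dry-run=client -o yaml > nginx-deployment.yaml
kubectl apply -f nginx-deployment.yaml

You can also watch a rollout complete, for example after changing the image or the replica count:

kubectl rollout status deployment/nginx-deployment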

Managing pods 

Pods are the smallest deployable units in Kubernetes and can contain one or more containers. They share the same network and storage resources. 

  • List pods: To view the pods in your cluster, use: 

kubectl get pods 

  • Pod logs: Retrieve the logs of a pod with: 

kubectl logs <pod_name>

  • Execute commands: You can execute commands in a pod using: 

kubectl exec -it <pod_name> -- /bin/bash
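If a pod does not behave as expected, kubectl describe is usually the quickest way to inspect its containers, restart counts, and recent events:

kubectl describe pod <pod_name>

Note that minimal container images may not include /bin/bash; in that case /bin/sh is usually available for kubectl exec.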

Services and exposing applications 

Services provide networking and load balancing to your application’s pods. They ensure that your application remains accessible and resilient. 

  • Expose deployment: To create a service and expose your deployment, use: 

kubectl expose deployment nginx-deployment --type=NodePort --port=80 

  • Access the service: Obtain the URL to access the service: 

minikube service nginx-deployment 
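To see which NodePort was assigned to the service, you can inspect it with kubectl; Minikube can also print the URL instead of opening it in a browser:

kubectl get service nginx-deployment
minikube service nginx-deployment --url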

Cleaning up 

Once you’re done experimenting with your local Kubernetes cluster, you can clean up the resources. 

  • Delete service: Delete the service using: 

kubectl delete service nginx-deployment 

  • Delete deployment: Delete the deployment with: 

kubectl delete deployment nginx-deployment 

  • Stop Minikube: Stop the Minikube cluster by running: 

minikube stop 
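If you no longer need the local cluster at all, you can also delete it entirely; this removes the Minikube VM or container along with everything running in it:

minikube delete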

Concluding the Kubernetes tutorial

Kubernetes has revolutionized the way we deploy and manage applications, providing a powerful and flexible platform for container orchestration. In this tutorial, we’ve covered the basics of setting up a local Kubernetes cluster using Minikube, creating deployments, managing pods, and exposing applications through services. This is just the tip of the iceberg – Kubernetes offers a wealth of advanced features for handling complex scenarios in production environments. Next week we will share part 2 of this getting started with Kubernetes series, in which we will explain how to deploy and expose a Node.js application on Kubernetes. Make sure to follow our socials and website so you don’t miss it!

If you have remarks or questions, let me know. We also provide free health checks for companies using Kubernetes, OKD, OpenShift, or Rancher. If you would like to apply for the health check, you can do so through the following link: https://fairbanks.nl/kubernetes-health-check/
