Do You Really Need Kubernetes?

Written by Intezer

    Kubernetes is one of the top open-source container orchestration projects, as it dramatically simplifies the creation and management of applications by providing built-in solutions to common problems. Although Kubernetes can be a solution for companies working with a large number of containers, others might be better off using an alternative solution.

    Advantages of Using Kubernetes

    There are several advantages that come with Kubernetes:

    • Automatic scheduling of containers: Kubernetes takes care of the heavy lifting and automatically schedules/reschedules containers from one node to another to increase resource utilization.
    • Service discovery: Containers can easily communicate via Kubernetes’ process of service discovery, which comes with two options to connect to a service: via an environment variable or Kube DNS (a cluster add-on).
    • Load balancing: When dealing with multiple services, you need a load balancer to spread the traffic. Kubernetes lets you set one up with just a few lines of configuration code (see the sketch after this list).
    • Self-healing: Kubernetes automatically monitors containers and reschedules them if they crash or terminate.
    • Horizontal scaling: Kubernetes can scale your containers up or down, either manually or automatically based on demand.
    • Rolling upgrades with zero downtime: Kubernetes allows for containers to be updated, or reverted, without disrupting service.
    • Secret data management: Kubernetes stores and manages sensitive information, including passwords and API keys.
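
    To make the load-balancing point concrete, here is a minimal sketch that uses the official Kubernetes Python client (the kubernetes package) to expose a hypothetical set of pods labeled app: web behind a LoadBalancer Service. The service name, labels, and ports are illustrative placeholders, and the snippet assumes a reachable cluster with a valid kubeconfig.

    ```python
    # Minimal sketch: expose pods labeled app=web behind a LoadBalancer Service.
    # Assumes the official `kubernetes` Python client and a valid kubeconfig;
    # the "web" name, labels, and ports are illustrative placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running inside a pod
    core_v1 = client.CoreV1Api()

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1ServiceSpec(
            selector={"app": "web"},  # the pods to spread traffic across
            ports=[client.V1ServicePort(port=80, target_port=8080)],
            type="LoadBalancer",  # the cloud provider provisions the external load balancer
        ),
    )
    core_v1.create_namespaced_service(namespace="default", body=service)
    ```

    Once the Service exists, Kubernetes spreads traffic arriving on port 80 across all healthy pods that match the selector.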

    Disadvantages of Using Kubernetes

    And then, there are the downsides:

    • Steep learning curve: Kubernetes doesn’t come with an easy-to-use GUI out of the box, which can make it challenging to learn.
    • Overkill for small applications: If your applications are too small, such as a landing page with a store’s location and hours, Kubernetes can be overkill, as it brings unneeded technical complexity.
    • Installation isn’t simple: Kubernetes comes with many components (API server, controller manager, etcd, kubelet, etc.), making installation not so straightforward, unlike with a managed solution.
    • Best for microservices: If you’re not running a microservices-style environment, Kubernetes offers little benefit.

    When Can I Skip Kubernetes?

    Before adopting Kubernetes, you should make sure your organizational requirement or use case demands it as a solution. Let’s take a look at some of the scenarios where you can skip Kubernetes:

    • You want to avoid complexity.
    • You are a small organization running only a few containers.
    • You don’t have the time and resources needed to train your team and support the rollout of a new technology (as Kubernetes is still relatively new).

    Alternative Solutions

    What other options do you have? Below we offer some alternative solutions and service comparisons to help you decide which choice would best fit your organizational needs. First up, Docker Swarm.

    Swarm vs. Kubernetes

    Let’s look at a side-by-side comparison between Docker Swarm and Kubernetes:

    • Developer community: Docker Swarm has a small developer community, while Kubernetes has a large one.
    • Architecture: Docker Swarm is suitable for small architectures, while Kubernetes is suitable for complex architectures.
    • Scale: Docker Swarm is useful when dealing with a small set of containers (10-20), while Kubernetes is useful when dealing with a large set (1,000+).

    There are several factors to consider when choosing between Swarm and Kubernetes as your container orchestration tool.

    Installation

    Kubernetes: Setting up a Kubernetes cluster is challenging and complicated, as it involves many components, and you also need each worker node to join the cluster.

    Docker Swarm: Setting up a cluster is pretty simple and only requires two commands: one to bring up the cluster and the other to join your node to the cluster.
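
    As a rough illustration, those two steps correspond to docker swarm init on the manager and docker swarm join on each worker. The sketch below shows the same flow with the Docker SDK for Python (the docker package); the IP address and port are placeholders, and in practice the second half runs on the worker machine.

    ```python
    # Minimal sketch of the two-step Swarm setup using the Docker SDK for Python.
    # The addresses below are placeholders; the equivalent CLI commands are
    # `docker swarm init` (on the manager) and `docker swarm join` (on each worker).
    import docker

    # On the manager node: bring up the cluster.
    manager = docker.from_env()
    manager.swarm.init(advertise_addr="192.0.2.10")  # the manager's reachable IP (example)
    worker_token = manager.swarm.attrs["JoinTokens"]["Worker"]

    # On each worker node: join the cluster using the manager's worker token.
    worker = docker.from_env()
    worker.swarm.join(
        remote_addrs=["192.0.2.10:2377"],  # manager address and default Swarm port
        join_token=worker_token,
    )
    ```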

    Scalability

    Kubernetes: Kubernetes’ distributed system guarantees your cluster state, but this in turn slows down scaling.

    Docker Swarm: Scaling up is up to 5x faster than in Kubernetes.

    Load Balancing

    Kubernetes: If your service spans multiple containers running in different pods, you’ll need to manually configure load balancing since Kubernetes has multiple nodes, multiple pods inside each node, and multiple containers inside each pod.

    Docker Swarm: Swarm takes care of automatic load balancing; it does not use pods and containers are easily discovered.

    In summary, Docker Swarm offers a simple solution to get started quickly, while with Kubernetes, you need to deal with a complex ecosystem. Swarm is popular among the developer community, which seeks simplicity and faster deployment.

    OpenShift

    Up next is OpenShift. Let’s take a look at what this alternative solution brings to the table by reviewing some of the drawbacks of Kubernetes and how OpenShift solves those issues:

    • Application deployment: In Kubernetes, deploying an application is time-consuming; in OpenShift, it is pretty simple, involving only creating a project (equivalent to a namespace) and an application.
    • CI/CD: With Kubernetes, you need to figure out your own CI/CD system; OpenShift does the heavy lifting and creates a CI/CD pipeline for your application.
    • Dashboard: With Kubernetes, you need to add a dashboard to manage your cluster health; OpenShift comes with a dashboard built on top of the Kubernetes API.
    • Adding nodes: In Kubernetes, adding a node to the cluster is time-consuming; OpenShift uses a pre-configured Ansible playbook, making it easier to add new nodes.

    OpenShift’s edge over Kubernetes comes from these built-in features, especially its ability to streamline day-to-day tasks through its integrated CI/CD system.

    Serverless Solution (AWS Lambda)

    AWS Lambda offers a serverless alternative to container orchestration, allowing you to run code without dealing with the servers themselves.

    • Servers: With Lambda, there are no servers to provision or manage; with Kubernetes, you must manage your own infrastructure.
    • Cost: With Lambda, you never pay for idle resources; with Kubernetes, you pay as long as your infrastructure is up.
    • Patching: With Lambda, there are no servers to patch; with Kubernetes, you have to patch your own servers.
    • Availability: Lambda is highly available out of the box; with Kubernetes, it is on you to make it highly available.
    • Compute power: Lambda offers a limited range of compute power (128 MB to 3 GB of memory); Kubernetes has no such limitation, as it depends on your provisioned infrastructure.
    • Execution time: Lambda imposes a time limit on code execution; Kubernetes has no such limitation.

    Use Cases of Serverless (AWS Lambda) vs. Kubernetes

    AWS Lambda uses an event-driven architecture, where an event from an AWS service invokes a Lambda function to process it. For example, when a user uploads a file or photo to your S3 bucket, that event can trigger your Lambda function.
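
    To illustrate, here is a minimal sketch of a Lambda handler for that S3 upload scenario. The handler name is arbitrary, and the wiring itself (the S3 event notification that invokes the function) is configured on the bucket rather than in the code.

    ```python
    # Minimal sketch of a Lambda handler invoked by an S3 "object created" event.
    # It only logs each uploaded object; real processing would go where the print is.
    def handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"New object uploaded: s3://{bucket}/{key}")
        return {"statusCode": 200}
    ```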

    Lambda is also useful when your traffic is unpredictable, as Lambda automatically takes care of autoscaling and follows the pay-as-you-go model. Kubernetes is better suited for applications where traffic is predictable, as you pay for the underlying infrastructure.

    In summary, Lambda’s pros are that it is easier to onboard and focuses on solving a business problem. Meanwhile, Kubernetes (installed on-premises) gives you complete control of your environment and a rich ecosystem but has a steep learning curve and operational overhead. Which orchestration solution you choose depends completely on your requirements and whether or not your traffic is predictable.

    Fargate

    Fargate is AWS’ managed serverless compute engine that lets you run containers without the need to provision and manage the underlying cluster.

    Comparison of AWS EKS vs. AWS Fargate

    • Service type: EKS is AWS’ managed Kubernetes solution; Fargate is a container-on-demand solution.
    • Cluster setup: EKS requires you to create your cluster; with Fargate, there is no need to create a cluster or determine EC2 instance sizes.
    • Pricing: The EKS control plane costs $144 per month; with Fargate, you only pay for tasks based on memory and CPU.
    • Use cases: EKS is suitable for cloud-native container architectures and makes it easier to move your on-premises Kubernetes to AWS; Fargate is suitable for workloads that run for a certain duration.

    So, if you need to run a container and don’t want to worry about patching, provisioning, and managing servers, then Fargate is your ideal solution; it also scales on demand. The only thing to note is the number of tasks you’re running since Fargate is an expensive solution if you have high-CPU/memory tasks running all the time.
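
    As a rough sketch of what "just run a container" looks like with Fargate, the boto3 call below launches a single task with the Fargate launch type; the cluster name, task definition, subnet, and security group IDs are placeholders you would replace with your own.

    ```python
    # Minimal sketch: launch a container on Fargate using boto3's ECS API.
    # Cluster, task definition, subnet, and security group values are placeholders.
    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    ecs.run_task(
        cluster="demo-cluster",            # existing ECS cluster (placeholder)
        launchType="FARGATE",              # no EC2 instances to provision or patch
        taskDefinition="demo-task:1",      # registered task definition (placeholder)
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "ENABLED",
            }
        },
    )
    ```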

    Google Kubernetes Engine (GKE)

    Google Kubernetes Engine (GKE) is Google’s managed service for running Kubernetes.

    Some of the advantages of using GKE over the vanilla Kubernetes solution are that GKE:

    • Automatically takes care of cluster creation and joining of the worker nodes (see the sketch after this list)
    • Takes care of managing the Kubernetes master and provides a high availability environment
    • Handles all container networking details for you
    • Provides hardened OS images built for containers
    • Takes care of autoscaling
    • Automatically upgrades your GKE cluster when a new version of Kubernetes is released
    • Features auto-repair, so if a node fails a health check, the Kubernetes engine will try to get that node back online
    • Comes with a logging and monitoring solution
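
    To give a sense of how much of this is handled for you, here is a minimal sketch that requests a new cluster through the google-cloud-container client; the project ID, location, and cluster name are placeholders, and GKE takes care of provisioning the control plane and registering the worker nodes.

    ```python
    # Minimal sketch: request a GKE cluster via the google-cloud-container client.
    # Project ID, location, and cluster name are placeholders; GKE provisions the
    # control plane and joins the worker nodes automatically.
    from google.cloud import container_v1

    gke = container_v1.ClusterManagerClient()

    cluster = container_v1.Cluster(
        name="demo-cluster",
        initial_node_count=3,  # GKE creates and registers the worker nodes
    )

    operation = gke.create_cluster(
        request={
            "parent": "projects/my-project/locations/us-central1-a",
            "cluster": cluster,
        }
    )
    print(f"Cluster creation started: {operation.name}")
    ```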

    GKE vs. AWS EKS

    Now, time to review some key differences between GKE and EKS:

    • Control plane upgrades: GKE upgrades the control plane automatically (users can also initiate upgrades); in EKS, control plane and add-on upgrades (e.g., AWS VPC CNI, kube-proxy) are initiated manually.
    • Worker node upgrades: GKE upgrades worker nodes automatically; in EKS, worker node upgrades are initiated by users.
    • Node OS: GKE uses Container-Optimized OS by default, with Ubuntu also available; EKS uses Amazon Linux 2 by default, with Ubuntu available as a partner AMI.
    • Container runtime: GKE supports Docker by default and also supports containerd and gVisor; EKS supports only Docker.
    • Control plane topology: GKE offers two cluster options, zonal clusters (a single control plane) or regional clusters (three Kubernetes control planes); the EKS control plane is deployed across multiple availability zones.
    • Node repair: GKE enables node auto-repair by default; EKS node repair is not Kubernetes-aware, but the AWS Auto Scaling group will kick in and replace the failed node.

    As you can see in the comparison above, GKE has a slight edge over EKS, as it automatically takes care of control plane and worker node upgrades, while this is a manual process in EKS.

    Summary

    There is no doubt that Kubernetes comes with a lot of powerful capabilities and features. Still, the question you need to ask before adopting Kubernetes in your organization is: Does it solve your business problem? You also need to keep in mind the steep learning curve and cost associated with pursuing this new technology.

    Managed solutions provided by cloud vendors like Google (GKE) and AWS (EKS) will make your life easier, but the trade-off is vendor lock-in. If your application is not complicated, you can use a solution like Docker Swarm or serverless.

    These are all container orchestration tools at the end of the day, and they are there to simplify your work. You need to choose wisely and consider other options before going with Kubernetes as a solution.

    Intezer Protect

    Once you pick a container orchestration platform that is right for you, you need to secure it. Containers are not secure by default and have become the target of an expanding number of attacks. Doki is malware that can break out of containers and take up your host’s resources for cryptomining. There’s also Kaiji, which makes your server part of a botnet used in DDoS attacks. Intezer Protect defends your container workloads in runtime against the latest cloud threats.
