What is Kubernetes and why do we call it k8s?
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. In simpler terms, it's like a traffic cop that helps containers (lightweight, portable packages of software) communicate with each other and work together smoothly.
As for the name "k8s," it's a shorthand way of writing "Kubernetes." The "8" in the middle represents the 8 letters between the "K" and the "s," which makes it easier to type and pronounce.
What are the benefits of using k8s?
Using Kubernetes (k8s) has several benefits, such as:
Scalability: k8s makes it easy to scale applications up or down as demand changes. You can add or remove containers without downtime (see the Deployment sketch after this list).
Fault-tolerance: k8s can automatically replace or restart containers that fail, ensuring that your application remains available.
Portability: k8s is a platform-agnostic tool, meaning it can be used with different cloud providers or on-premise infrastructure.
Resource optimization: k8s can manage the allocation of computing resources, making sure that each container has the appropriate amount of resources.
Automated deployment: k8s can automate the deployment process, which means less time spent configuring and deploying applications manually.
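To make the scalability point concrete, here is a minimal sketch of a Deployment manifest; the name, labels, and nginx image are illustrative placeholders rather than anything Kubernetes prescribes.

```yaml
# Minimal sketch: a Deployment whose replica count can be raised or lowered
# to scale the application; names and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # change this number to scale the application
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # placeholder image
        ports:
        - containerPort: 80
```

Changing replicas (by editing the manifest, with kubectl scale, or automatically through a HorizontalPodAutoscaler) adds or removes Pods while the remaining ones keep serving traffic.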
Explain the architecture of Kubernetes
The architecture of Kubernetes can be broken down into several key components:
Kube-API Server
The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. The API server is the front end of the Kubernetes control plane.
The main implementation of a Kubernetes API server is kube-apiserver. kube-apiserver is designed to scale horizontally; that is, it scales by deploying more instances. You can run several instances of kube-apiserver and balance traffic between those instances.
etcd
Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
If your Kubernetes cluster uses etcd as its backing store, make sure you have a backup plan for the data.
You can find in-depth information about etcd in the official documentation.
kube-scheduler
Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.
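As a rough illustration of those inputs, the sketch below shows a Pod that declares resource requests and a node-affinity rule; the disktype label and its value are assumptions about how the nodes happen to be labeled.

```yaml
# Minimal sketch of scheduling inputs: resource requests and a node-affinity
# rule that the scheduler takes into account; labels and values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  containers:
  - name: app
    image: nginx:1.25        # placeholder image
    resources:
      requests:
        cpu: "250m"          # resource requirements considered by the scheduler
        memory: "128Mi"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype    # placeholder node label
            operator: In
            values:
            - ssd
```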
kube-controller-manager
Control plane component that runs controller processes.
Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.
Some types of these controllers are:
Node controller: Responsible for noticing and responding when nodes go down.
Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion (a minimal Job manifest is sketched after this list).
EndpointSlice controller: Populates EndpointSlice objects (to provide a link between Services and Pods).
ServiceAccount controller: Creates default ServiceAccounts for new namespaces.
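For the Job controller in particular, a one-off task might look like the sketch below; the busybox image and the command are placeholders.

```yaml
# Minimal sketch of a one-off task: the Job controller creates a Pod from this
# template and retries (up to backoffLimit) until it runs to completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  template:
    spec:
      containers:
      - name: task
        image: busybox:1.36   # placeholder image
        command: ["sh", "-c", "echo hello && sleep 5"]
      restartPolicy: Never    # Jobs must not restart Pods automatically
  backoffLimit: 2
```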
cloud-controller-manager
A Kubernetes control plane component that embeds cloud-specific control logic. The cloud controller manager lets you link your cluster into your cloud provider's API and separates the components that interact with that cloud platform from components that only interact with your cluster.
The cloud controller manager only runs controllers that are specific to your cloud provider. If you are running Kubernetes on your premises, or in a learning environment inside your PC, the cluster does not have a cloud controller manager.
As with the kube-controller-manager, the cloud-controller-manager combines several logically independent control loops into a single binary that you run as a single process. You can scale horizontally (run more than one copy) to improve performance or to help tolerate failures.
The following controllers can have cloud provider dependencies:
Node controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
Route controller: For setting up routes in the underlying cloud infrastructure
Service controller: For creating, updating and deleting cloud provider load balancers (see the Service sketch after this list)
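For the service controller, the sketch below shows the kind of object that triggers it: a Service of type LoadBalancer. The name, selector, and ports are placeholders, and the load balancer that actually gets provisioned depends entirely on the cloud provider.

```yaml
# Minimal sketch: a Service of type LoadBalancer, which the cloud controller
# manager's service controller turns into a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web         # must match the labels on the backing Pods
  ports:
  - port: 80         # port exposed by the load balancer
    targetPort: 80   # port the Pods listen on
```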
Node Components
Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.
kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers that were not created by Kubernetes.
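A rough sketch of such a PodSpec is shown below; the nginx image and the probe path are placeholders. The kubelet starts the container and uses the liveness probe to decide whether it is healthy, restarting it if the probe keeps failing.

```yaml
# Minimal sketch: a Pod whose container the kubelet runs and health-checks
# via a liveness probe; image and probe path are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: health-checked-pod
spec:
  containers:
  - name: app
    image: nginx:1.25        # placeholder image
    livenessProbe:
      httpGet:
        path: /              # placeholder health endpoint
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
```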
kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
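As a sketch of the Service abstraction that kube-proxy helps implement, consider a simple ClusterIP Service (names, labels, and ports are placeholders): kube-proxy programs rules on every node so that traffic to the Service's virtual IP and port is forwarded to one of the matching Pods.

```yaml
# Minimal sketch: a ClusterIP Service. kube-proxy maintains the node-level
# network rules that route traffic for this Service to its Pods.
apiVersion: v1
kind: Service
metadata:
  name: web-internal
spec:
  selector:
    app: web         # Pods backing the Service
  ports:
  - port: 8080       # port clients inside the cluster connect to
    targetPort: 80   # containerPort on the Pods
```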
Container runtime
The container runtime is the software that is responsible for running containers.
Kubernetes supports container runtimes such as containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface).
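One place where the runtime choice surfaces in the API is the RuntimeClass object. The sketch below assumes a node whose CRI runtime has a handler named runsc configured (as for gVisor); the handler and names are illustrative, not something every cluster has.

```yaml
# Minimal sketch: a RuntimeClass naming a runtime handler configured in the
# node's CRI runtime, and a Pod that opts into it via runtimeClassName.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc             # assumed handler in the node's runtime configuration
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx:1.25      # placeholder image
```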
What is the Control Plane?
The Control Plane in Kubernetes is the collection of components that manage the state of the cluster and orchestrate the deployment and scaling of containerized applications. It includes several key components, such as the API server, etcd, controller manager, and scheduler.
Write the difference between kubectl and kubelet.
Kubectl and kubelet are two different components of the Kubernetes ecosystem that serve different purposes.
Kubectl: Kubectl is the command-line tool used to interact with the Kubernetes API server. It allows users to create, modify, and delete Kubernetes objects such as pods, services, and deployments. Kubectl is typically used by administrators and developers to manage the Kubernetes cluster and deploy applications (a small example follows this comparison).
Kubelet: Kubelet is an agent that runs on each worker node in the Kubernetes cluster. It is responsible for managing the containers running on that node and ensuring they are healthy. Kubelet communicates with the API server to receive instructions on which pods to run and manages the containers using the container runtime, such as Docker.
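As a small illustration of the kubectl side of this comparison, the sketch below shows a trivial object and, in the comments, the typical kubectl commands used to manage it against the API server; the namespace name is a placeholder.

```yaml
# Minimal sketch of an object kubectl manages through the API server.
# Typical lifecycle (illustrative):
#   kubectl apply -f namespace.yaml    # create or update the object
#   kubectl get namespace team-a       # read its current state
#   kubectl delete -f namespace.yaml   # remove it
apiVersion: v1
kind: Namespace
metadata:
  name: team-a   # placeholder name
```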
Explain the role of the API server.
The API server serves as the primary control point for the Kubernetes cluster, allowing administrators and developers to manage the cluster and deploy applications. It is a critical component of the Kubernetes architecture, and the stability and performance of the API server can have a significant impact on the overall health and performance of the cluster.