Kubernetes on Google Cloud Platform : Google Kubernetes Engine

2022.10.21


Since Kubernetes is itself a tool designed by Google, it made sense for Google to offer it as the default container orchestration service on their cloud platform. Kubernetes is a very widely used container orchestration tool, and many developers are already familiar with it; this familiarity plays a big role in the adoption of services and products.

Let's try to understand the different features which Google Cloud Platform offers to its users.

To see where Google Kubernetes Engine fits, it helps to start with IaaS (Infrastructure as a Service) offerings such as Compute Engine.

Any IaaS offering lets us share resources with others by virtualising hardware.

Each VM has its own instance of an OS, so it runs with access to memory, file systems, networking interfaces and the other attributes that physical computers also have.

With VMs, scaling up is difficult: as new VMs come up, they need time to boot, run their startup scripts and perhaps install a few dependencies before load can be shifted to them. An easier alternative is App Engine, a PaaS (Platform as a Service) offering in which a family of services your application might need is already present; you just upload the code and the application is up and running. In this scenario, however, control over the underlying architecture of the application is lost.

This is where containers come in: they provide the independent scalability of workloads that PaaS gives you, along with the control over the underlying infrastructure that IaaS gives you.

A container starts as quickly as a new process.

If an application has certain dependencies which need to be installed before a container is launched, the docker build command is used to build a Docker image with those dependencies pre-installed on top of a base OS image, and the docker run command then starts a container from that image.
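As a minimal sketch, the Dockerfile below bakes a static page into the official nginx base image (the my-nginx tag, the index.html file and the port numbers are illustrative assumptions, not from the original article):

FROM nginx:1.25
# assumes an index.html sits next to the Dockerfile
COPY index.html /usr/share/nginx/html/

docker build -t my-nginx .          # bake the page into the image
docker run -d -p 8080:80 my-nginx   # start a container from it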

gcloud container clusters create k1

This command creates a Kubernetes cluster called k1.
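A typical follow-up, not shown in the command above, is to fetch the cluster's credentials so that kubectl can talk to it:

gcloud container clusters get-credentials k1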

If multiple containers have a hard dependency on each other (one cannot function without the other), they can both be deployed in the same pod, sharing networking and storage resources for data exchange.
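A minimal sketch of such a pod, with a web server and a sidecar sharing the pod's network namespace (the names and images are illustrative assumptions, not from the original article):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    command: ["sh", "-c", "while true; do wget -qO- localhost:80 > /dev/null; sleep 60; done"]

The sidecar reaches the web container over localhost because both containers share the same network namespace.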

kubectl run nginx --image=nginx

This command runs a container inside a new pod.

Here the --image flag tells kubectl which image to use; the pod fetches the nginx image and runs an nginx server in the container, with no further configuration needed.

Deployment

A Deployment represents a group of replicas of the same pod.

Deployments can be configured to manage the whole application or a single component of it.
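A minimal sketch of a Deployment manifest, saved for example as nginx-deployment.yaml (the replica count, labels and image tag are illustrative assumptions, not from the original article); the kubectl apply command shown later in this article consumes a file like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
        ports:
        - containerPort: 80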

The kubectl get pods command lists the running pods; adding a label selector narrows the list to the pods of a given deployment.
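For example, using the app=nginx label from the manifest above:

kubectl get pods -l app=nginx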

 

The kubectl expose command connects a load balancer to the given deployment, which then gets an external/internal IP representing the whole deployment.
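For example, to put the nginx deployment behind a cloud load balancer on port 80 (the port values are illustrative):

kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer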

A service groups a set of pods together and provides a stable endpoint for them.

As pods keep coming and going, a service cannot be represented by just the IP of one of its front-end pods, and the front end cannot reach the back end that way either; hence a stable endpoint is needed.

The kubectl get services command shows you the external IP of the service.
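A typical invocation, with illustrative (hypothetical) output:

kubectl get services
NAME    TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   10.0.32.10    34.122.x.x    80:31000/TCP   2m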

The kubectl scale command scales the current deployment up or down.
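For example, to grow the nginx deployment to five replicas (the count is arbitrary):

kubectl scale deployment nginx --replicas=5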

 

The real beauty of Kubernetes is that everything can be controlled through configuration files: you declare how a deployment must look, and the engine handles the rest, including scaling up and down. For example:

kubectl get pods -l app=nginx -o yaml

This prints the live configuration of the nginx pods in YAML form (for the deployment object itself, kubectl get deployment nginx -o yaml works the same way).

kubectl apply -f nginx-deployment.yaml

This applies the changes in the configuration file to the cluster.

The kubectl get replicasets command shows the ReplicaSets and their updated state.

To update the application, a rolling update mechanism is used: pods from the updated deployment come up one by one while the old ones are scaled down.
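A sketch of how such a rolling update is typically triggered and observed (the new image tag is an illustrative assumption):

kubectl set image deployment/nginx nginx=nginx:1.25
kubectl rollout status deployment/nginx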