Deployment with GCP Kubernetes Engine (2)
Inflearn course
- I have created a course for job seekers.
- If you took the course after finding it through this blog, please send me the post title and link via an Inflearn message, and I will send you a Starbucks iced americano as a gift.
- [Non-majors welcome] Python Data Analysis for Absolute Beginners - Getting Started with Kaggle
One-line summary
Let's run nginx using (GCP) GKE.
Step 1. GCP Shell 활성화
- You can list the active account name with this command:
(your_project_id)$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* student-04-e46af1f1cd7b@qwiklabs.net
To set the active account, run:
$ gcloud config set account `ACCOUNT`
- You can list the project ID with this command:
(your_project_id)$ gcloud config list project
[core]
project = qwiklabs-gcp-04-79efc1e4ae0f
Your active configuration is: [cloudshell-24251]
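As a side note, if you want to capture the project ID in a script rather than reading it off the screen, a small helper can parse the `gcloud config list project` output shown above. This is my own sketch, not part of the lab; the `extract_project` name is hypothetical.

```shell
# Hypothetical helper (not part of the lab): pull the project ID out of
# `gcloud config list project` output, which looks like:
#   [core]
#   project = qwiklabs-gcp-04-79efc1e4ae0f
extract_project() {
  awk -F' = ' '/^project/ { print $2 }'
}

# In Cloud Shell you would use it as:
#   PROJECT_ID=$(gcloud config list project 2>/dev/null | extract_project)
```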
Step 2. Create Deployment manifests
- Task 1. Create deployment manifests and deploy to the cluster
(1) Connect to the lab GKE cluster
- In Cloud Shell, type the following command to set the environment variable for the zone and cluster name.
(your_project_id)$ export my_zone=us-central1-a
(your_project_id)$ export my_cluster=standard-cluster-1
- Configure kubectl tab completion in Cloud Shell.
(your_project_id)$ source <(kubectl completion bash)
- In Cloud Shell, configure access to your cluster for the kubectl command-line tool, using the following command:
$ gcloud container clusters get-credentials $my_cluster --zone $my_zone
Fetching cluster endpoint and auth data.
kubeconfig entry generated for standard-cluster-1.
- In Cloud Shell enter the following command to clone the repository to the lab Cloud Shell.
(your_project_id)$ git clone https://github.com/GoogleCloudPlatform/training-data-analyst
- Create a soft link as a shortcut to the working directory.
(your_project_id)$ ln -s ~/training-data-analyst/courses/ak8s/v1.1 ~/ak8s
- Change to the directory that contains the sample files for this lab.
(your_project_id)$ cd ~/ak8s/Deployments/
your_id@cloudshell:~/ak8s/Deployments (your_project_id)$
(2) Create a deployment manifest
- You will create a deployment using a sample deployment manifest called nginx-deployment.yaml that has been provided for you. This deployment is configured to run three Pod replicas, with a single nginx container in each Pod listening on TCP port 80.
- Let's create nginx-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
- To deploy your manifest, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl apply -f ./nginx-deployment.yaml
- To view a list of deployments, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 24s
Step 3. Manually scale up and down the number of Pods in deployments
Sometimes, you want to shut down a Pod instance. Other times, you want ten Pods running. In Kubernetes, you can scale a specific Pod to the desired number of instances. To shut them down, you scale to zero. In this task, you scale Pods up and down in the Google Cloud Console and Cloud Shell.
(1) Scale Pods up and down in the console
- Switch to the Google Cloud Console tab.
- On the Navigation menu, click Kubernetes Engine > Workloads.
- Click nginx-deployment (your deployment) to open the Deployment details page.
- At the top, click ACTIONS > Scale.
- Type 1 and click SCALE.
- This action scales down your cluster. You should see the Pod status being updated under Managed Pods. You might have to click Refresh.
(2) Scale Pods up and down in the shell
- In the Cloud Shell, to view a list of Pods in the deployments, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 8m59s
- To scale the Pod back up to three replicas, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl scale --replicas=3 deployment nginx-deployment
deployment.apps/nginx-deployment scaled
- To view a list of Pods in the deployments, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 10m
Step 4. Trigger a deployment rollout and a deployment rollback
A deployment’s rollout is triggered if and only if the deployment’s Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the deployment, do not trigger a rollout. In this task, you trigger deployment rollout, and then you trigger deployment rollback.
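For context, how a rollout replaces Pods can be tuned through the deployment's update strategy. The lab manifest omits this stanza, so the defaults apply; a hypothetical example (my own addition, not in the provided nginx-deployment.yaml) would look like:

```yaml
# Hypothetical addition to the deployment spec (not in the lab manifest).
# Note that changing this stanza alone does NOT trigger a rollout,
# because it sits outside .spec.template; only Pod template changes do.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra Pod above the desired count
      maxUnavailable: 1    # at most 1 Pod unavailable during the update
```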
(1) Trigger a deployment rollout
- To update the version of nginx in the deployment, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
deployment.apps/nginx-deployment image updated
- To view the rollout status, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl rollout status deployment.v1.apps/nginx-deployment
deployment "nginx-deployment" successfully rolled out
- To verify the change, get the list of deployments.
~/ak8s/Deployments (your_project_id)$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 13m
- View the rollout history of the deployment.
~/ak8s/Deployments (your_project_id)$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
(2) Trigger a deployment rollback
- To roll back to the previous version of the nginx deployment, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl rollout undo deployments nginx-deployment
deployment.apps/nginx-deployment rolled back
- View the updated rollout history of the deployment.
~/ak8s/Deployments (your_project_id)$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
2 kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record=true
3 <none>
- View the details of the latest deployment revision:
~/ak8s/Deployments (your_project_id)$ kubectl rollout history deployment/nginx-deployment --revision=3
Pod Template:
Labels: app=nginx
pod-template-hash=5bf87f5f59
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Step 5. Define the service type in the manifest
- In this task, you create and verify a service that controls inbound traffic to an application. Services can be configured as ClusterIP, NodePort or LoadBalancer types. In this lab, you configure a LoadBalancer.
(1) Define service types in the manifest
- A manifest file called service-nginx.yaml that deploys a LoadBalancer service type has been provided for you. This service is configured to distribute inbound traffic on TCP port 60000 to port 80 on any containers that have the label app: nginx.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
- In the Cloud Shell, to deploy your manifest, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl apply -f ./service-nginx.yaml
service/nginx created
- This manifest defines a service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the nginx container that you deployed in task 1. This service also applies to any other Pods with the app: nginx label, including any that are created after the service.
(2) Verify the LoadBalancer creation
- To view the details of the nginx service, execute the following command:
~/ak8s/Deployments (your_project_id)$ kubectl get service nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.11.254.246 <pending> 60000:32437/TCP 39s
- When the external IP appears, open http://[EXTERNAL_IP]:60000/ in a new browser tab to see the nginx server being served through network load balancing.
It may take a few seconds before the ExternalIP field is populated for your service. This is normal. Just re-run the kubectl get services nginx command every few seconds until the field is populated.
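The manual re-running can also be scripted. The helper below is my own sketch (not part of the lab); it retries an arbitrary command until it prints a non-empty value, and assumes kubectl is already configured for the cluster as above.

```shell
# Hypothetical helper (not part of the lab): retry a command until it
# prints a non-empty value, then echo that value. Gives up after a
# configurable number of tries (default 30, 2 seconds apart).
wait_for_value() {
  _cmd=$1
  _tries=${2:-30}
  while [ "$_tries" -gt 0 ]; do
    _val=$(eval "$_cmd")
    if [ -n "$_val" ]; then
      echo "$_val"
      return 0
    fi
    _tries=$((_tries - 1))
    sleep 2
  done
  return 1
}

# In the lab you would call it as:
#   wait_for_value "kubectl get service nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'"
```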
Step 6. Perform a canary deployment
- A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments. And it can direct a subset of users to the canary version to mitigate the risk of new releases. The manifest file nginx-canary.yaml that is provided for you deploys a single pod running a newer version of nginx than your main deployment. In this task, you create a canary deployment using this new deployment file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        track: canary
        Version: 1.9.1
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1
        ports:
        - containerPort: 80
- The manifest for the nginx Service you deployed in the previous task uses a label selector to target Pods with the app: nginx label. Both the normal deployment and this new canary deployment have the app: nginx label, so inbound connections are distributed by the service to both the normal and the canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.
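Because the canary Pods carry the extra track: canary label, the selector mechanism could also be used to target them alone. The variant below is hypothetical (my own, not a file provided in the lab) and only illustrates how narrowing the selector changes which Pods receive traffic:

```yaml
# Hypothetical Service variant (not provided in the lab): adding
# track: canary to the selector would send traffic only to canary Pods.
apiVersion: v1
kind: Service
metadata:
  name: nginx-canary-only
spec:
  type: LoadBalancer
  selector:
    app: nginx
    track: canary
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
```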
- Create the canary deployment based on the configuration file.
~/ak8s/Deployments (your_project_id)$ kubectl apply -f nginx-canary.yaml
deployment.apps/nginx-canary created
- When the deployment is complete, verify that both the nginx and the nginx-canary deployments are present.
~/ak8s/Deployments (your_project_id)$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-canary 1/1 1 1 24s
nginx-deployment 3/3 3 3 21m
- Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard Welcome to nginx page.
- Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas.
~/ak8s/Deployments (your_project_id)$ kubectl scale --replicas=0 deployment nginx-deployment
- Verify that the only running replica is now the Canary deployment:
~/ak8s/Deployments (your_project_id)$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-canary 1/1 1 1 104s
nginx-deployment 0/0 0 0 22m
- Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard Welcome to nginx page, showing that the Service is automatically balancing traffic to the canary deployment.
Session affinity
The service configuration used in the lab does not ensure that all requests from a single client always connect to the same Pod. Each request is treated separately and can connect to either the normal nginx deployment or to the nginx-canary deployment. This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this, you can set the sessionAffinity field to ClientIP in the specification of the service if you need a client's first request to determine which Pod is used for all subsequent connections.
For example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80