Kubernetes Tutorial Series (6): Resource Management and Quality of Service

Preface

The previous article in this series, Kubernetes Tutorial Series (5): Mastering the Core Concept of Pods, used YAML examples to introduce the pod, one of the most important concepts in Kubernetes. This article continues the series with pod resource management and pod Quality of Service (QoS).


1. Pod Resource Management

1.1 Defining resources

A container needs resources allocated to it while it runs, so how does this tie in with cgroups? The answer is the resources definition. Resources are allocated mainly as cpu and memory, and the definition has two parts: requests and limits. requests is the requested amount, used primarily by the Kubernetes scheduler when it first places the pod, and represents resources that must be available; limits is the upper bound, i.e. the pod may not exceed the size defined in limits, and anything beyond it is constrained through cgroups. Resources in a pod can be defined with the following four fields:

  • spec.containers[].resources.requests.cpu requested CPU size; e.g. 0.1 CPU and 100m both mean 1/10 of a CPU;
  • spec.containers[].resources.requests.memory requested memory size, in units such as M, Mi, G, Gi;
  • spec.containers[].resources.limits.cpu CPU limit, a threshold that cannot be exceeded; this is the value enforced in the cgroup;
  • spec.containers[].resources.limits.memory memory limit, a threshold that cannot be exceeded; exceeding it triggers an OOM kill;

1. Let's start with how to define a pod's resources, using nginx-demo as an example: the container requests 250m of CPU with a limit of 500m, and requests 128Mi of memory with a limit of 256Mi. You can also define resources for multiple containers, in which case the pod's total is the sum over its containers (see the sketch after the manifest). The definition looks like this:

[root@node-1 demo]#cat nginx-resource.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo
  labels:
    name: nginx-demo
spec:
  containers:
  - name: nginx-demo
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources:
      requests:
        cpu: 0.25
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
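When a pod contains more than one container, the pod-level request or limit is simply the sum across its containers. A minimal sketch of this (the second container and its values are hypothetical, added only for illustration):

spec:
  containers:
  - name: nginx-demo
    image: nginx:1.7.9
    resources:
      requests:
        cpu: 250m       # pod total request: 250m + 100m = 350m cpu
        memory: 128Mi   # pod total request: 128Mi + 64Mi = 192Mi memory
  - name: log-sidecar   # hypothetical second container
    image: busybox:1.28
    resources:
      requests:
        cpu: 100m
        memory: 64Mi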

2. Apply nginx-resource.yaml (if the pod from before still exists, delete it first with kubectl delete pod <pod-name>, or give this pod a different name)

[root@node-1 demo]# kubectl apply -f nginx-resource.yaml 
pod/nginx-demo created

3. View the pod's resource allocation details

[root@node-1 demo]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
demo-7b86696648-8bq7h   1/1     Running   0          12d
demo-7b86696648-8qp46   1/1     Running   0          12d
demo-7b86696648-d6hfw   1/1     Running   0          12d
nginx-demo              1/1     Running   0          94s

[root@node-1 demo]# kubectl describe pods nginx-demo  
Name:         nginx-demo
Namespace:    default
Priority:     0
Node:         node-3/10.254.100.103
Start Time:   Sat, 28 Sep 2019 12:10:49 +0800
Labels:       name=nginx-demo
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-demo"},"name":"nginx-demo","namespace":"default"},"sp...
Status:       Running
IP:           10.244.2.13
Containers:
  nginx-demo:
    Container ID:   docker://55d28fdc992331c5c58a51154cd072cd6ae37e03e05ae829a97129f85eb5ed79
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 12:10:51 +0800
    Ready:          True
    Restart Count:  0
    Limits:        #resource limits
      cpu:     500m
      memory:  256Mi
    Requests:      #resource requests
      cpu:        250m
      memory:     128Mi
    Environment:  <none>
    ...省略...

4. Where do a pod's resources come from? From the node, of course. When we create a pod with requests set, the Kubernetes scheduler kube-scheduler runs two phases: filtering and scoring. It first filters the nodes by the requested resources to find the candidates that qualify, then scores and ranks them to pick the node best suited to run the pod, and finally runs the pod on that node. For the scheduling algorithm and its details, refer to the introduction to Kubernetes scheduling algorithms. Below are the resource allocation details of node-3:

[root@node-1 ~]# kubectl describe node node-3
...output omitted...
Capacity:    #total resources on the node: 1 CPU, ~2G memory, 110 pods
 cpu:                1
 ephemeral-storage:  51473888Ki
 hugepages-2Mi:      0
 memory:             1882352Ki
 pods:               110
Allocatable: #resources the node can hand out to pods; reserved resources are excluded from Allocatable
 cpu:                1
 ephemeral-storage:  47438335103
 hugepages-2Mi:      0
 memory:             1779952Ki
 pods:               110
System Info:
 Machine ID:                 0ea734564f9a4e2881b866b82d679dfc
 System UUID:                FFCD2939-1BF2-4200-B4FD-8822EBFFF904
 Boot ID:                    293f49fd-8a7c-49e2-8945-7a4addbd88ca
 Kernel Version:             3.10.0-957.21.3.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.6.3
 Kubelet Version:            v1.15.3
 Kube-Proxy Version:         v1.15.3
PodCIDR:                     10.244.2.0/24
Non-terminated Pods:         (3 in total) #resource usage of the pods running on this node; besides nginx-demo there are several other pods
  Namespace                  Name                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                           ------------  ----------  ---------------  -------------  ---
  default                    nginx-demo                     250m (25%)    500m (50%)  128Mi (7%)       256Mi (14%)    63m
  kube-system                kube-flannel-ds-amd64-jp594    100m (10%)    100m (10%)  50Mi (2%)        50Mi (2%)      14d
  kube-system                kube-proxy-mh3gq               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12d
Allocated resources:  #cpu and memory already allocated
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                350m (35%)   600m (60%)
  memory             178Mi (10%)  306Mi (17%)
  ephemeral-storage  0 (0%)       0 (0%)
Events:              <none>

1.2 How resource allocation works

The requests and limits defined in a pod inform the Kubernetes scheduler kube-scheduler, but the cpu and memory values are actually applied on the container, where isolation is enforced through the container's cgroups. Let's look at how allocation works; each field maps to a concrete cgroup parameter:

  • spec.containers[].resources.requests.cpu maps to CpuShares, the CPU weight, i.e. the share of CPU the container gets under contention
  • spec.containers[].resources.requests.memory is used only by the kube-scheduler; it has no effect on the container itself
  • spec.containers[].resources.limits.cpu maps to CpuQuota and CpuPeriod, both in microseconds; CpuQuota/CpuPeriod is the maximum fraction of CPU the container may use, e.g. 500m allows 50% of one CPU
  • spec.containers[].resources.limits.memory maps to Memory, the maximum memory the container may use; exceeding it triggers an OOM kill
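Plugging the nginx-demo values from section 1.1 into these rules gives the exact cgroup numbers we should expect to find on the container (a sanity-check sketch derived from the mapping above):

requests.cpu   250m  -> CpuShares = 0.25 * 1024                 = 256
limits.cpu     500m  -> CpuQuota  = 0.5 * CpuPeriod (100000us)  = 50000   (50% of one core)
limits.memory  256Mi -> Memory    = 256 * 1024 * 1024           = 268435456 bytes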

Taking the nginx-demo pod defined above as an example, let's check which docker parameters the pod's requests and limits actually end up in:

1. Find the node the pod runs on: nginx-demo was scheduled to node-3

[root@node-1 ~]# kubectl get pods -o wide nginx-demo
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
nginx-demo   1/1     Running   0          96m   10.244.2.13   node-3   <none>           <none>

2. Get the container ID, either from the containerID field of kubectl describe pods nginx-demo, or by logging in to node-3 and filtering by name. By default there are two containers per pod: one created from the pause image and the other from the application image

[root@node-3 ~]# docker container  list |grep nginx
55d28fdc9923        84581e99d807           "nginx -g 'daemon of…"   2 hours ago         Up 2 hours                                   k8s_nginx-demo_nginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0
2fe0498ea9b5        k8s.gcr.io/pause:3.1   "/pause"                 2 hours ago         Up 2 hours                                   k8s_POD_nginx-demo_default_66958ef7-507a-41cd-a688-7a4976c6a71e_0

3. Inspect the docker container details

[root@node-3 ~]# docker container inspect 55d28fdc9923
[
...partial output omitted...
    {
        "Image": "sha256:84581e99d807a703c9c03bd1a31cd9621815155ac72a7365fd02311264512656",
        "ResolvConfPath": "/var/lib/docker/containers/2fe0498ea9b5dfe1eb63eba09b1598a8dfd60ef046562525da4dcf7903a25250/resolv.conf",
        "HostConfig": {
            "Binds": [
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/volumes/kubernetes.io~secret/default-token-5qwmc:/var/run/secrets/kubernetes.io/serviceaccount:ro",
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/66958ef7-507a-41cd-a688-7a4976c6a71e/containers/nginx-demo/1cc072ca:/dev/termination-log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-size": "100m"
                }
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 256,        CPU分配的權(quán)重,作用在requests.cpu上
            "Memory": 268435456,     內(nèi)存分配的大小,作用在limits.memory上
            "NanoCpus": 0,
            "CgroupParent": "kubepods-burstable-pod66958ef7_507a_41cd_a688_7a4976c6a71e.slice",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 100000,    CPU分配的使用比例,和CpuQuota一起作用在limits.cpu上
            "CpuQuota": 50000,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 268435456,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
        },   
    }
]
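Rather than scanning the full inspect dump, the relevant fields can be pulled out directly with docker's Go-template filter (a sketch; the container ID is the one obtained above, and the output should match the CpuShares=256, CpuQuota=50000 and Memory=268435456 values shown in the dump):

docker container inspect 55d28fdc9923 \
  --format 'CpuShares={{.HostConfig.CpuShares}} CpuQuota={{.HostConfig.CpuQuota}}/{{.HostConfig.CpuPeriod}} Memory={{.HostConfig.Memory}}'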

1.3 CPU resource test

CPU limits in a pod are defined through requests.cpu and limits.cpu, where limits is the CPU ceiling that must not be exceeded. We verify this with the stress image, a CPU and memory stress-testing tool whose load is set through the args parameters. Pod CPU and memory can be monitored with kubectl top, which depends on a metrics component such as metrics-server or Prometheus; since none is installed here, we use docker stats instead (see the sketch below).
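For reference, with a metrics add-on deployed the same numbers would come straight from the API; this sketch is not runnable in the current cluster precisely because metrics-server is missing:

kubectl top pod <pod-name>
kubectl top node <node-name>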

1. Define a pod with the stress image, requesting 0.25 cores with a usage cap of 0.5 cores

[root@node-1 demo]# cat cpu-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo
  namespace: default
  annotations: 
    kubernetes.io/description: "demo for cpu requests and"
spec:
  containers:
  - name: stress-cpu
    image: vish/stress
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    args:
    - -cpus
    - "1"

2. Apply the YAML file to create the pod

[root@node-1 demo]# kubectl apply -f cpu-demo.yaml 
pod/cpu-demo created

3. View the pod's resource allocation details

[root@node-1 demo]# kubectl describe pods cpu-demo 
Name:         cpu-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 14:33:12 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"demo for cpu requests and"},"name":"cpu-demo","nam...
              kubernetes.io/description: demo for cpu requests and
Status:       Running
IP:           10.244.1.14
Containers:
  stress-cpu:
    Container ID:  docker://14f93767ad37b92beb91e3792678f60c9987bbad3290ae8c29c35a2a80101836
    Image:         progrium/stress
    Image ID:      docker-pullable://progrium/stress@sha256:e34d56d60f5caae79333cee395aae93b74791d50e3841986420d23c2ee4697bf
    Port:          <none>
    Host Port:     <none>
    Args:
      -cpus
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 14:34:28 +0800
      Finished:     Sat, 28 Sep 2019 14:34:28 +0800
    Ready:          False
    Restart Count:  3
    Limits:         #CPU usage cap
      cpu:  500m
    Requests:       #CPU request
      cpu:  250m

4. Log in to the node in question and inspect the container's resource usage with docker container stats

[Screenshot: docker container stats on the node, showing the stress container's CPU usage held at about 50%]
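The command behind the screenshot looks roughly like this (a sketch; run it on the node hosting the pod, and the name filter is an assumption):

docker stats --no-stream $(docker ps -q --filter name=cpu-demo)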

Checking with top on the pod's node, the CPU usage is likewise capped at 50%.

[Screenshot: top output on the node, CPU usage limited to 50%]

The test confirms the conclusion: the stress container is told to use 1 full core, but limits.cpu caps usable CPU at 500m, and measurements both inside the container and on the host show usage strictly limited to 50% (the node has a single CPU; with 2 CPUs it would show as 25% of total capacity).

1.4 Memory resource test

1. Use the stress image to verify where requests.memory and limits.memory take effect. limits.memory defines the memory the container may use; once it is exceeded, the container is OOM killed. Below we define a test container whose memory may not exceed 512Mi, and use the stress image's --vm-bytes flag to allocate 256M

[root@node-1 demo]# cat memory-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "256M", "--vm-hang", "1"]

2. Apply the YAML file to create the pod

[root@node-1 demo]# kubectl apply -f memory-demo.yaml 
pod/memory-stress-demo created

[root@node-1 demo]# kubectl get pods memory-stress-demo -o wide 
NAME                 READY   STATUS    RESTARTS   AGE   IP            NODE     NOMINATED NODE   READINESS GATES
memory-stress-demo   1/1     Running   0          41s   10.244.1.19   node-2   <none>           <none>

3. Check the resource allocation

[root@node-1 demo]# kubectl describe  pods memory-stress-demo
Name:         memory-stress-demo
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 15:13:06 +0800
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubernetes.io/description":"stress demo for memory limits"},"name":"memory-str...
              kubernetes.io/description: stress demo for memory limits
Status:       Running
IP:           10.244.1.16
Containers:
  memory-stress-limits:
    Container ID:  docker://c7408329cffab2f10dd860e50df87bd8671e65a0f8abb4dae96d059c0cb6bb2d
    Image:         polinux/stress
    Image ID:      docker-pullable://polinux/stress@sha256:6d1825288ddb6b3cec8d3ac8a488c8ec2449334512ecb938483fc2b25cbbdb9a
    Port:          <none>
    Host Port:     <none>
    Command:
      stress
    Args:
      --vm
      1
      --vm-bytes
      256Mi
      --vm-hang
      1
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Sat, 28 Sep 2019 15:14:08 +0800
      Finished:     Sat, 28 Sep 2019 15:14:08 +0800
    Ready:          False
    Restart Count:  3
    Limits:          #memory limit
      memory:  512Mi
    Requests:         #memory request
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)

4. Check the container's memory usage: it allocates 256M against a ceiling of 512Mi, about 50% utilization. Since this stays below the limit, the container runs normally.

[Screenshot: docker stats output showing the container using about 256M of its 512Mi memory limit]

5. What happens when the container exceeds its memory limit? We raise --vm-bytes to 520M. The container keeps trying to run, gets OOM killed once it passes the limit, and the kubelet keeps restarting it, so the RESTARTS counter climbs steadily.

[root@node-1 demo]# cat memory-demo.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: memory-stress-demo
  annotations:
    kubernetes.io/description: "stress demo for memory limits"
spec:
  containers:
  - name: memory-stress-limits
    image: polinux/stress
    resources:
      requests:
        memory: 128Mi
      limits:
        memory: 512Mi
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "520M", "--vm-hang", "1"] . #容器中使用內(nèi)存為520M

Check the container status: it shows OOMKilled, and RESTARTS keeps increasing as the kubelet retries
[root@node-1 demo]# kubectl get pods memory-stress-demo 
NAME                 READY   STATUS      RESTARTS   AGE
memory-stress-demo   0/1     OOMKilled   3          60s
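The kill reason can also be read straight from the pod status; the jsonpath below should print OOMKilled for this pod (a sketch):

kubectl get pod memory-stress-demo -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'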

2. Pod Quality of Service

Quality of Service (QoS) is a key input to pod scheduling and eviction decisions; each QoS class implies a different level of service and a different priority. There are three classes (a quick way to check a pod's class follows the list):

  • BestEffort: best-effort allocation; the default when no resources are specified; lowest priority;
  • Burstable: burstable resources; the pod is guaranteed at least its requests; the most common class;
  • Guaranteed: fully guaranteed resources; requests and limits are defined and equal; highest priority.
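Kubernetes records the computed class in the pod status, so it can be read directly; for example, the nginx-demo pod from section 1.1 should report Burstable, since its requests are lower than its limits (a sketch):

kubectl get pod nginx-demo -o jsonpath='{.status.qosClass}'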

2.1 BestEffort

1. A pod that defines no resources defaults to the BestEffort class, which has the lowest priority: when node resources run short and pods must be evicted, BestEffort pods are evicted first. Below we define a BestEffort pod

[root@node-1 demo]# cat nginx-qos-besteffort.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-besteffort
  labels:
    name: nginx-qos-besteffort
spec:
  containers:
  - name: nginx-qos-besteffort
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources: {}

2. Create the pod and check its QoS class: qosClass is BestEffort

[root@node-1 demo]# kubectl apply -f nginx-qos-besteffort.yaml 
pod/nginx-qos-besteffort created

Check the QoS class
[root@node-1 demo]# kubectl get pods nginx-qos-besteffort -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-besteffort"},"name":"nginx-qos-besteffort","namespace":"default"},"spec":{"containers":[{"image":"nginx:1.7.9","imagePullPolicy":"IfNotPresent","name":"nginx-qos-besteffort","ports":[{"containerPort":80,"name":"nginx-port-80","protocol":"TCP"}],"resources":{}}]}}
  creationTimestamp: "2019-09-28T11:12:03Z"
  labels:
    name: nginx-qos-besteffort
  name: nginx-qos-besteffort
  namespace: default
  resourceVersion: "1802411"
  selfLink: /api/v1/namespaces/default/pods/nginx-qos-besteffort
  uid: 56e4a2d5-8645-485d-9362-fe76aad76e74
spec:
  containers:
  - image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    name: nginx-qos-besteffort
    ports:
    - containerPort: 80
      name: nginx-port-80
      protocol: TCP
    resources: {}
    terminationMessagePath: /dev/termination-log
...output omitted...
status:
  hostIP: 10.254.100.102
  phase: Running
  podIP: 10.244.1.21
  qosClass: BestEffort  #QoS class
  startTime: "2019-09-28T11:12:03Z"

3. Delete the test pod

[root@node-1 demo]# kubectl delete pods nginx-qos-besteffort 
pod "nginx-qos-besteffort" deleted

2.2 Burstable

1. A pod gets the Burstable class, second only to Guaranteed, when at least one container defines requests and the requested resources are smaller than its limits

[root@node-1 demo]# cat nginx-qos-burstable.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-burstable
  labels:
    name: nginx-qos-burstable
spec:
  containers:
  - name: nginx-qos-burstable
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources: 
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi

2. Apply the YAML file to create the pod, then check its QoS class

[root@node-1 demo]# kubectl apply -f nginx-qos-burstable.yaml 
pod/nginx-qos-burstable created

Check the QoS class
[root@node-1 demo]# kubectl describe pods nginx-qos-burstable 
Name:         nginx-qos-burstable
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:27:37 +0800
Labels:       name=nginx-qos-burstable
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-burstable"},"name":"nginx-qos-burstable","namespa...
Status:       Running
IP:           10.244.1.22
Containers:
  nginx-qos-burstable:
    Container ID:   docker://d1324b3953ba6e572bfc63244d4040fee047ed70138b5a4bad033899e818562f
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:27:39 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:        100m
      memory:     128Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Burstable  #the QoS class is Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  95s   default-scheduler  Successfully assigned default/nginx-qos-burstable to node-2
  Normal  Pulled     94s   kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    94s   kubelet, node-2    Created container nginx-qos-burstable
  Normal  Started    93s   kubelet, node-2    Started container nginx-qos-burstable

2.3 Guaranteed

1. For Guaranteed, both cpu and memory must define requests and limits, and the requests must equal the limits. This class has the highest priority and is the first to be protected during scheduling and eviction. Below, nginx-qos-guaranteed sets requests.cpu equal to limits.cpu, and likewise requests.memory equal to limits.memory.

[root@node-1 demo]# cat nginx-qos-guaranteed.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-qos-guaranteed
  labels:
    name: nginx-qos-guaranteed
spec:
  containers:
  - name: nginx-qos-guaranteed
    image: nginx:1.7.9
    imagePullPolicy: IfNotPresent
    ports:
    - name: nginx-port-80
      protocol: TCP
      containerPort: 80
    resources: 
      requests:
        cpu: 200m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 256Mi

2、應(yīng)用yaml文件生成pod并查看pod的Qos類型為可完全保障Guaranteed

[root@node-1 demo]# kubectl apply -f nginx-qos-guaranteed.yaml 
pod/nginx-qos-guaranteed created

[root@node-1 demo]# kubectl describe pods nginx-qos-guaranteed 
Name:         nginx-qos-guaranteed
Namespace:    default
Priority:     0
Node:         node-2/10.254.100.102
Start Time:   Sat, 28 Sep 2019 19:37:15 +0800
Labels:       name=nginx-qos-guaranteed
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"name":"nginx-qos-guaranteed"},"name":"nginx-qos-guaranteed","names...
Status:       Running
IP:           10.244.1.23
Containers:
  nginx-qos-guaranteed:
    Container ID:   docker://cf533e0e331f49db4e9effb0fbb9249834721f8dba369d281c8047542b9f032c
    Image:          nginx:1.7.9
    Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sat, 28 Sep 2019 19:37:16 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     200m
      memory:  256Mi
    Requests:
      cpu:        200m
      memory:     256Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5qwmc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-5qwmc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-5qwmc
    Optional:    false
QoS Class:       Guaranteed #the QoS class is Guaranteed
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  25s   default-scheduler  Successfully assigned default/nginx-qos-guaranteed to node-2
  Normal  Pulled     24s   kubelet, node-2    Container image "nginx:1.7.9" already present on machine
  Normal  Created    24s   kubelet, node-2    Created container nginx-qos-guaranteed
  Normal  Started    24s   kubelet, node-2    Started container nginx-qos-guaranteed

寫在最后

This is the sixth article in the Kubernetes tutorial series, covering resource allocation via resources and Quality of Service (QoS). A few practical recommendations on resources:

  • keep the requests:limits ratio within 1:2, to avoid overcommitting resources, resource contention, and OOM kills;
  • pods define no resources by default, so define a LimitRange per namespace to make sure every pod gets resources assigned (see the sketch after this list);
  • to keep a node from hanging or OOMing under resource pressure, configure reserved and eviction resources on the node, e.g. reservations with --system-reserved=cpu=200m,memory=1G and eviction thresholds with --eviction-hard=memory.available<500Mi.
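For the second recommendation, a minimal LimitRange sketch (the name and values are illustrative): containers created in the namespace without an explicit resources block inherit the defaults below.

apiVersion: v1
kind: LimitRange
metadata:
  name: default-resources
  namespace: default
spec:
  limits:
  - type: Container
    defaultRequest:      # applied when a container omits requests
      cpu: 100m
      memory: 128Mi
    default:             # applied when a container omits limits
      cpu: 200m
      memory: 256Mi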

Appendix

Managing compute resources for containers: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/

Assigning memory resources to pods: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/

Assigning CPU resources to pods: https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/

Quality of Service for pods: https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/

Docker關(guān)于CPU的限制:https://www.cnblogs.com/sparkdev/p/8052522.html


When your talent cannot yet support your ambition, it is time to settle down and study.

Back to the Kubernetes tutorial series index
