Core Concepts of Resource Management

K8s Design Philosophy: Layered Architecture


Core layer: the most essential Kubernetes functionality; it exposes APIs for building higher-level applications externally, and provides a pluggable application execution environment internally

Application layer: deployment (stateless applications (no cluster relationships), stateful applications (database master/slave, Redis clusters), batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.). Stateful applications generally run on physical machines

Management layer: system metrics (such as infrastructure, container, and network metrics), automation (such as auto-scaling and dynamic provisioning), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)

Interface layer: the kubectl command-line tool, client SDKs, and cluster federation

Ecosystem: the vast ecosystem of container cluster management and scheduling above the interface layer, which falls into two categories

1. Outside Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
2. Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, the cluster's own configuration and management, etc.

K8s Design Philosophy: API Design Principles

Kubernetes Design Philosophy and Distributed Systems

  • Analyzing and understanding Kubernetes' design philosophy gives us a deeper understanding of the system and helps us use it better to manage distributed, cloud-native applications; it also lets us draw on its experience in distributed system design.

API Design Principles

  • For a cloud computing system, the system API occupies the commanding position in the system design. As noted earlier, every time the K8s cluster gains a new feature or adopts a new technology, it necessarily introduces a corresponding API object to support managing that feature; understanding and mastering the API is like grabbing K8s by the nose ring. The K8s system API is designed according to the following principles:

    • All APIs should be declarative. As noted above, declarative operations, unlike imperative ones, are stable under repetition, which matters in a distributed environment prone to data loss or duplication. Declarative operations are also easier for users, and they let the system hide implementation details; hiding those details in turn preserves the system's freedom to keep optimizing. Furthermore, a declarative API implies that all API objects are nouns, e.g. Service and Volume; these nouns describe the target distributed object the user wants to obtain. (A small sketch of the idempotence this buys follows this list.)

    • API objects should complement and compose with each other. In effect this encourages API objects to meet the classic object-oriented design goal of "high cohesion, loose coupling": decompose the business concepts sensibly and make the resulting objects reusable. After all, K8s, a distributed system management platform, is itself a kind of business system; its business just happens to be scheduling and managing container services.

    • High-level APIs should be designed around operational intent. Designing good APIs has much in common with designing a good application using object-oriented methods: high-level design must start from the business, not prematurely from the technical implementation. For K8s' high-level APIs, that means starting from K8s' own business, i.e. the intent behind scheduling and managing containers.

    • Low-level APIs should be designed to serve the control needs of the high-level APIs. The purpose of a low-level API is to be used by higher-level APIs; to reduce redundancy and increase reuse, low-level API design should also be needs-driven, and should resist the temptation to be shaped by the technical implementation.

    • Avoid simple wrappers, and avoid internal hidden mechanisms that cannot be observed through the external API. A simple wrapper provides no new functionality while adding a dependency on the wrapped API. Hidden internal mechanisms are also hostile to system maintenance: for example, PetSet and ReplicaSet are both just collections of Pods, so K8s defines them as two different API objects, rather than using a single ReplicaSet and distinguishing stateful from stateless internally with some special algorithm.

    • The complexity of API operations should be proportional to the number of objects. This principle is about system performance: for the system not to slow to a crawl as it scales, the minimal requirement is that API operation complexity not exceed O(N), where N is the number of objects; otherwise the system is not horizontally scalable.

    • API object state must not depend on network connectivity. As everyone knows, network connections drop all the time in a distributed environment, so for API object state to withstand an unstable network, it must not depend on the state of a network connection.

    • Avoid letting operating mechanisms depend on global state, because keeping global state synchronized in a distributed system is very hard.
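
A quick illustration of the first principle: applying the same manifest repeatedly is safe and converges to the declared state, while repeating an imperative create fails (app.yaml is a hypothetical manifest file):

kubectl apply -f app.yaml #first run: the resource is created
kubectl apply -f app.yaml #second run: unchanged; the repeated operation is stable
kubectl create -f app.yaml #errors once the resource exists (AlreadyExists)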

API Objects

  • The unit of management operations in a K8s cluster


Category                              Names
Workload resources                    Pod, ReplicaSet, ReplicationController, Deployment, StatefulSet, DaemonSet, Job, CronJob
Service discovery & load balancing    Service, Ingress
Configuration & storage               Volume, PersistentVolume, CSI, ConfigMap, Secret
Cluster resources                     Namespace, Node, Role, ClusterRole, RoleBinding, ClusterRoleBinding
Metadata resources                    HPA, PodTemplate, LimitRange

Using K8s Commands


Kubectl Overview

Use the following syntax to run kubectl commands from a terminal window:

kubectl [command] [TYPE] [NAME] [flags]
where command, TYPE, NAME, and flags are:

  • command: specifies the operation to perform on one or more resources, e.g. create, get, describe, delete

  • TYPE: specifies the resource type. Resource types are case-insensitive, and you can use the singular, plural, or abbreviated form. For example, the following commands produce the same output.

kubectl get pod pod1
kubectl get pods pod1
kubectl get po pod1
root@k8s-master1:~# kubectl get po
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf           1/1     Running   3          5d4h
net-test1-5fcc69db59-xzvrl           1/1     Running   3          5d4h
net-test2-8456fd74f7-ckpsr           1/1     Running   3          5d4h
net-test2-8456fd74f7-nbzx8           1/1     Running   3          5d4h
nginx-deployment-5f4dc447b5-wcg2t    1/1     Running   0          46h
tomcat-deployment-5cd65b4d74-d4x7s   1/1     Running   2          4d19h
root@k8s-master1:~# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf           1/1     Running   3          5d4h
net-test1-5fcc69db59-xzvrl           1/1     Running   3          5d4h
net-test2-8456fd74f7-ckpsr           1/1     Running   3          5d4h
net-test2-8456fd74f7-nbzx8           1/1     Running   3          5d4h
nginx-deployment-5f4dc447b5-wcg2t    1/1     Running   0          46h
tomcat-deployment-5cd65b4d74-d4x7s   1/1     Running   2          4d19h
root@k8s-master1:~# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf           1/1     Running   3          5d4h
net-test1-5fcc69db59-xzvrl           1/1     Running   3          5d4h
net-test2-8456fd74f7-ckpsr           1/1     Running   3          5d4h
net-test2-8456fd74f7-nbzx8           1/1     Running   3          5d4h
nginx-deployment-5f4dc447b5-wcg2t    1/1     Running   0          46h
tomcat-deployment-5cd65b4d74-d4x7s   1/1     Running   2          4d19h
  • NAME: specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are shown, e.g. kubectl get pods

When performing an operation on several resources, you can specify each resource by type and name, or give one or more files:

  • To specify resources by type and name:

    • To group resources of the same type: TYPE1 name1 name2 name<#>. Example: kubectl get pod example-pod1 example-pod2
    • To specify several resource types individually: TYPE1/name1 TYPE1/name2 TYPE2/name3 TYPE<#>/name<#>. Example: kubectl get pod/example-pod1 replicationcontroller/example-rc1
  • To specify resources with one or more files: -f file1 -f file2 -f file<#>

    • Use YAML rather than JSON, since YAML is easier to work with, especially for configuration files. Example: kubectl get pod -f ./pod.yaml
  • flags: specifies optional flags. For example, you can use the -s or --server flag to specify the address and port of the Kubernetes API server.

Operations and Syntax

root@k8s-master1:~# kubectl get service
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes              ClusterIP   192.168.0.1     <none>        443/TCP        5d7h
magedu-nginx-service    NodePort    192.168.3.55    <none>        80:30004/TCP   2d2h
magedu-tomcat-service   NodePort    192.168.1.200   <none>        80:30005/TCP   4d19h
root@k8s-master1:~# kubectl describe service magedu-tomcat-service #show the details of the magedu-tomcat-service service
Name:                     magedu-tomcat-service
Namespace:                default
Labels:                   app=magedu-tomcat-service-label
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"magedu-tomcat-service-label"},"name":"magedu-tomcat-serv...
Selector:                 app=tomcat
Type:                     NodePort
IP:                       192.168.1.200
Port:                     http  80/TCP
TargetPort:               8080/TCP
NodePort:                 http  30005/TCP
Endpoints:                10.10.5.17:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@k8s-master1:~# kubectl explain pod  #explain an object's fields; fields can be drilled into level by level

root@k8s-master1:~# kubectl explain pod.apiVersion

root@k8s-master1:~# kubectl explain deployment.spec.selector
  • The difference between create and apply
    • apply supports editing a YAML file repeatedly and having the changes take effect: after editing, simply re-run apply -f file.yaml. With create, once the resource has been created, a changed YAML file only takes effect if you delete the resource and recreate it (delete, edit, recreate). A sketch of both workflows follows.
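
A minimal sketch of the two workflows, using a hypothetical manifest nginx.yml:

kubectl create -f nginx.yml #initial creation succeeds
#...edit nginx.yml...
kubectl create -f nginx.yml #fails: the resource already exists
kubectl delete -f nginx.yml && kubectl create -f nginx.yml #create needs delete-edit-recreate

kubectl apply -f nginx.yml #created
#...edit nginx.yml...
kubectl apply -f nginx.yml #configured: the change takes effect in place
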
root@k8s-master1:~# kubectl cluster-info
Kubernetes master is running at https://192.168.26.248:6443
KubeDNS is running at https://192.168.26.248:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@k8s-master1:~# kubectl cordon --help
Mark node as unschedulable.

Examples:
  # Mark node "foo" as unschedulable.
  kubectl cordon foo

Options:
      --dry-run=false: If true, only print the object that would be sent, without sending it.
  -l, --selector='': Selector (label query) to filter on

Usage:
  kubectl cordon NODE [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
root@k8s-master1:~# kubectl cordon k8s-master1 #mark this node unschedulable
node/k8s-master1 cordoned
root@k8s-master1:~# kubectl get node
NAME          STATUS                     ROLES    AGE    VERSION
k8s-master1   Ready,SchedulingDisabled   master   5d8h   v1.17.4
k8s-master2   Ready                      master   5d8h   v1.17.4
k8s-master3   Ready                      master   5d7h   v1.17.4
node-1        Ready                      <none>   5d7h   v1.17.4
node-2        Ready                      <none>   5d7h   v1.17.4
node-3        Ready                      <none>   5d7h   v1.17.4
root@k8s-master1:~# kubectl uncordon k8s-master1 #make this node schedulable again
node/k8s-master1 uncordoned
root@k8s-master1:~# kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
k8s-master1   Ready    master   5d8h   v1.17.4
k8s-master2   Ready    master   5d8h   v1.17.4
k8s-master3   Ready    master   5d7h   v1.17.4
node-1        Ready    <none>   5d7h   v1.17.4
node-2        Ready    <none>   5d7h   v1.17.4
node-3        Ready    <none>   5d7h   v1.17.4
root@k8s-master1:~# kubectl drain --help #evicts (stateless) workloads; used to take a node offline urgently
Drain node in preparation for maintenance.

...omitted...

Usage:
  kubectl drain NODE [options]

Use "kubectl options" for a list of global command-line options (applies to all commands).
root@k8s-master1:~# kubectl api-resources
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
events                            ev                                          true         Event
limitranges                       limits                                      true         LimitRange
namespaces                        ns                                          false        Namespace
nodes                             no                                          false        Node
persistentvolumeclaims            pvc                                         true         PersistentVolumeClaim
persistentvolumes                 pv                                          false        PersistentVolume
pods                              po                                          true         Pod
podtemplates                                                                  true         PodTemplate
replicationcontrollers            rc                                          true         ReplicationController
resourcequotas                    quota                                       true         ResourceQuota
secrets                                                                       true         Secret
serviceaccounts                   sa                                          true         ServiceAccount
services                          svc                                         true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io         false        APIService
controllerrevisions                            apps                           true         ControllerRevision
daemonsets                        ds           apps                           true         DaemonSet
deployments                       deploy       apps                           true         Deployment
replicasets                       rs           apps                           true         ReplicaSet
statefulsets                      sts          apps                           true         StatefulSet
tokenreviews                                   authentication.k8s.io          false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io           false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling                    true         HorizontalPodAutoscaler
cronjobs                          cj           batch                          true         CronJob
jobs                                           batch                          true         Job
certificatesigningrequests        csr          certificates.k8s.io            false        CertificateSigningRequest
leases                                         coordination.k8s.io            true         Lease
endpointslices                                 discovery.k8s.io               true         EndpointSlice
events                            ev           events.k8s.io                  true         Event
ingresses                         ing          extensions                     true         Ingress
ingresses                         ing          networking.k8s.io              true         Ingress
networkpolicies                   netpol       networking.k8s.io              true         NetworkPolicy
runtimeclasses                                 node.k8s.io                    false        RuntimeClass
poddisruptionbudgets              pdb          policy                         true         PodDisruptionBudget
podsecuritypolicies               psp          policy                         false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io      true         RoleBinding
roles                                          rbac.authorization.k8s.io      true         Role
priorityclasses                   pc           scheduling.k8s.io              false        PriorityClass
csidrivers                                     storage.k8s.io                 false        CSIDriver
csinodes                                       storage.k8s.io                 false        CSINode
storageclasses                    sc           storage.k8s.io                 false        StorageClass
volumeattachments                              storage.k8s.io                 false        VolumeAttachment

Output Options and Syntax

root@k8s-master1:~# kubectl get pod 
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf           1/1     Running   3          5d6h
net-test1-5fcc69db59-xzvrl           1/1     Running   3          5d6h
net-test2-8456fd74f7-ckpsr           1/1     Running   3          5d6h
net-test2-8456fd74f7-nbzx8           1/1     Running   3          5d6h
nginx-deployment-5f4dc447b5-wcg2t    1/1     Running   0          2d
tomcat-deployment-5cd65b4d74-d4x7s   1/1     Running   2          4d21h
root@k8s-master1:~# kubectl get pod -o wide #show additional columns
NAME                                 READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
net-test1-5fcc69db59-c2plf           1/1     Running   3          5d6h    10.10.5.18   node-3   <none>           <none>
net-test1-5fcc69db59-xzvrl           1/1     Running   3          5d6h    10.10.4.14   node-2   <none>           <none>
net-test2-8456fd74f7-ckpsr           1/1     Running   3          5d6h    10.10.6.16   node-1   <none>           <none>
net-test2-8456fd74f7-nbzx8           1/1     Running   3          5d6h    10.10.4.15   node-2   <none>           <none>
nginx-deployment-5f4dc447b5-wcg2t    1/1     Running   0          2d      10.10.4.16   node-2   <none>           <none>
tomcat-deployment-5cd65b4d74-d4x7s   1/1     Running   2          4d21h   10.10.5.17   node-3   <none>           <none>
root@k8s-master1:~# kubectl get pod -o json #output in JSON format
{
    "apiVersion": "v1",
    "items": [
        {
            "apiVersion": "v1",
            "kind": "Pod",
            "metadata": {
                "creationTimestamp": "2020-03-29T09:12:07Z",
                "generateName": "net-test1-5fcc69db59-",
                "labels": {
                    "pod-template-hash": "5fcc69db59",
                    "run": "net-test1"
                },
                "name": "net-test1-5fcc69db59-c2plf",
...omitted...
root@k8s-master1:~# kubectl get pod -o yaml #output in YAML format
apiVersion: v1
items:
- apiVersion: v1
  kind: Pod
  metadata:
    creationTimestamp: "2020-03-29T09:12:07Z"
    generateName: net-test1-5fcc69db59-
    labels:
      pod-template-hash: 5fcc69db59
      run: net-test1
    name: net-test1-5fcc69db59-c2plf
    namespace: default
    ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true

The "Nose Ring" of K8s: the API

Several Important K8s Concepts

  • What do you deal with when calling K8s? Its API objects, via the declarative API
  • How do you call the K8s declarative API? Through YAML files:
apiVersion: apps/v1 #version of the Kubernetes API used to create this object
kind: Deployment #the kind of object to create
metadata: #metadata that helps identify the object uniquely, including a name and an optional namespace
  name: nginx-deployment
  labels:
    app: nginx
spec: #detailed definition of the containers in the deployment; several containers may be defined, their names must not clash
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.harbor.com/base-images/nginx:1.14.2
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: magedu-nginx-service-label
  name: magedu-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80 #service port; traffic to service port 80 is forwarded to the container's (targetPort) pod port 80
    protocol: TCP
    targetPort: 80 #pod port
    nodePort: 30004 #host port; host port 30004 forwards to service port 80
  selector:
    app: nginx
  • How are the required fields declared?
    • 1. apiVersion: the version of the Kubernetes API used to create this object
    • 2. kind: the kind of object to create
    • 3. metadata: data that helps identify the object uniquely, including a name and an optional namespace
    • 4. spec: the detailed definition of the object, e.g. the containers in a deployment
    • 5. status: generated automatically by K8s after the Pod is created (a read-back sketch follows this list)
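
The split can be seen by reading the two fields back separately; a sketch using jsonpath against the nginx-deployment created above:

kubectl get deployment nginx-deployment -o jsonpath='{.spec.replicas}' #the desired state you declared
kubectl get deployment nginx-deployment -o jsonpath='{.status.readyReplicas}' #the actual state reported by K8s
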
root@k8s-master1:~# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf           1/1     Running   3          5d6h
net-test1-5fcc69db59-xzvrl           1/1     Running   3          5d6h
net-test2-8456fd74f7-ckpsr           1/1     Running   3          5d6h
net-test2-8456fd74f7-nbzx8           1/1     Running   3          5d6h
nginx-deployment-5f4dc447b5-wcg2t    1/1     Running   0          2d
tomcat-deployment-5cd65b4d74-d4x7s   1/1     Running   2          4d21h
root@k8s-master1:~# kubectl describe pod tomcat-deployment-5cd65b4d74-d4x7s
Name:         tomcat-deployment-5cd65b4d74-d4x7s
Namespace:    default
Priority:     0
Node:         node-3/192.168.26.164
Start Time:   Mon, 30 Mar 2020 02:24:23 +0800
Labels:       app=tomcat
              pod-template-hash=5cd65b4d74
Annotations:  <none>
Status:       Running #whether the status is Running
IP:           10.10.5.17
IPs:
  IP:           10.10.5.17
Controlled By:  ReplicaSet/tomcat-deployment-5cd65b4d74
Containers:
  tomcat:
    Container ID:   docker://1cd04d9b73e80ea8cbd849f62ba2c19a4ef83ccc44ddbb168d278fb6f4c07a5d
    Image:          k8s.harbor.com/bokebi/tomcat:app
    Image ID:       docker-pullable://k8s.harbor.com/bokebi/tomcat@sha256:76e6f187899ec9d58c865c71458bbd8a780d6e019e093b0513e886ef4d3f0aeb
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running #whether the state is Running
      Started:      Wed, 01 Apr 2020 18:33:57 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    143
      Started:      Wed, 01 Apr 2020 15:44:35 +0800
      Finished:     Wed, 01 Apr 2020 18:33:18 +0800
    Ready:          True
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qnr5k (ro)
Conditions:
  Type              Status #the condition statuses
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-qnr5k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qnr5k
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

YAML Files and Syntax Basics

  • Prepare the YAML file in advance, along with the resources the pod needs to run, such as its namespace
root@k8s-master1:~# cd /opt/
root@k8s-master1:/opt# mkdir file-yaml
root@k8s-master1:/opt# cd file-yaml/
root@k8s-master1:/opt/file-yaml# vim linux39-ns.yml
apiVersion: v1 #API version
kind: Namespace #the kind is Namespace
metadata: #metadata
  name: linux39 #namespace name

root@k8s-master1:/opt/file-yaml# kubectl get ns
NAME                   STATUS   AGE
default                Active   5d9h
kube-node-lease        Active   5d9h
kube-public            Active   5d9h
kube-system            Active   5d9h
kubernetes-dashboard   Active   5d1h
test                   Active   2d

root@k8s-master1:/opt/file-yaml# kubectl apply -f linux39-ns.yml 
namespace/linux39 created

root@k8s-master1:/opt/file-yaml# kubectl get ns
NAME                   STATUS   AGE
default                Active   5d9h
kube-node-lease        Active   5d9h
kube-public            Active   5d9h
kube-system            Active   5d9h
kubernetes-dashboard   Active   5d1h
linux39                Active   12s
test                   Active   2d

Case-sensitive
Indentation expresses hierarchy
Tabs are not allowed for indentation; only spaces
The number of indentation spaces does not matter, as long as elements at the same level are left-aligned
"#" starts a comment; everything from it to the end of the line is ignored by the parser
Better suited than JSON to configuration files
  • Main YAML features (a small sketch follows this list)
Parent-child hierarchy
Lists
Key-value pairs (also called maps, i.e. key: value data)
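
A minimal, K8s-independent sketch showing all three features at once:

parents: #top of the hierarchy
  father: #child level, expressed purely by indentation
    name: value #a key-value pair (a map entry)
    sons: #a list
    - one #list items start with "- "
    - two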

An Nginx Service YAML File Explained

# pwd
/opt/k8s-data/yaml/bokebi

# mkdir nginx tomcat-app1 tomcat-app2

# cd nginx

# pwd
/opt/k8s-data/yaml/bokebi/nginx

# cat nginx.yaml
---
kind: Deployment #object kind: a Deployment controller; see kubectl explain Deployment
apiVersion: extensions/v1beta1 #API version, kubectl explain Deployment.apiVersion
metadata: #metadata of the pod, kubectl explain Deployment.metadata
  labels: #custom pod labels, kubectl explain Deployment.metadata.labels
    app: bokebi-nginx-deployment-label #a label named app with the value bokebi-nginx-deployment-label; used later
  name: bokebi-nginx-deployment #name of the pod (deployment)
  namespace: bokebi #namespace of the pod; defaults to default
spec: #detailed definition of the containers in the deployment, kubectl explain Deployment.spec
  replicas: 1 #number of pod replicas to create; defaults to 1
  selector: #label selector
    matchLabels: #labels to match; must be set
      app: bokebi-nginx-selector #the target label to match
  template: #pod template; required, describes the pods to be created
    metadata: #template metadata
      labels: #template labels, kubectl explain Deployment.spec.template.metadata.labels
        app: bokebi-nginx-selector #label that must equal the value of Deployment.spec.selector.matchLabels.app
    spec: #pod definition
      containers: #list of containers in the pod; one or more, containers cannot be added or removed dynamically
      - name: bokebi-nginx-container #container name
        image: k8s.harbor.com/base-images/nginx:1.14.2 #image address
        #command: ["/apps/tomcat/bin/run_tomcat.sh"] #container startup command
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always #image pull policy
        ports: #container port list
        - containerPort: 80 #a port
          protocol: TCP #port protocol
          name: http #port name
        - containerPort: 443 #another port
          protocol: TCP #port protocol
          name: https #port name
        env: #environment variables
        - name: "password" #variable name; must be quoted
          value: "123456" #value of this variable
        - name: "age" #another variable name
          value: "18" #value of the other variable
        resources: #resource requests and limits
          limits: #resource limits (upper bound)
            cpu: 2 #CPU limit in cores; can be written as 2 or as 2000m (2000 millicores)
            memory: 2Gi #memory limit in Mi/Gi; maps to the docker run --memory parameter
          requests: #resource requests
            cpu: 1 #CPU request, the initial amount available at container start; 1 or 1000m (1000 millicores)
            memory: 512Mi #memory request, the initial amount available at container start; used when scheduling the pod
---
kind: Service #object kind: Service
apiVersion: v1 #Service API version, kubectl explain Service.apiVersion
metadata: #service metadata, kubectl explain Service.metadata
  labels: #custom labels, kubectl explain Service.metadata.labels
    app: bokebi-nginx #content of the service label
  name: bokebi-nginx-spec #name of the service; this name is resolved by DNS
  namespace: bokebi #namespace the service belongs to, i.e. where the service is created
spec: #detailed definition of the service, kubectl explain Service.spec
  type: NodePort #service type, defines how the service is accessed; defaults to ClusterIP, kubectl explain Service.spec.type
  ports: #access ports, kubectl explain Service.spec.ports
  - name: http #port name
    port: 80 #service port 80
    protocol: TCP #protocol type
    targetPort: 80 #port on the target pod
    nodePort: 30001 #port exposed on the node
  - name: https #SSL port
    port: 443 #service port 443
    protocol: TCP #port protocol
    targetPort: 443 #target pod port
    nodePort: 30043 #SSL port exposed on the node
  selector: #the service's label selector, defines the target pods
    app: bokebi-nginx #route traffic to the selected pods; must equal Deployment.spec.selector.matchLabels
The difference between spec and status:
    spec is the desired state (what you want K8s to do)
    status is the actual state (what K8s has actually managed to do)
Pod overview:
    1. The Pod is the smallest unit in k8s
    2. A Pod can run a single container or several containers (see the sketch below)
    3. When several containers run in a Pod, they are scheduled together
    4. A Pod's lifecycle is ephemeral; it does not self-heal and is discarded after use
    5. Pods are normally created and managed through a Controller
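
A minimal sketch of points 2 and 3: one Pod running two containers that are always scheduled onto the same node (the names and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web #first container
    image: nginx
  - name: sidecar #second container; shares the Pod's network namespace with web
    image: busybox
    command: ["sleep", "3600"]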

Controllers

Lab Examples

root@k8s-master1:/opt/k8s-data/yaml/namespace# pwd
/opt/k8s-data/yaml/namespace

root@k8s-master1:/opt/k8s-data/yaml/namespace# ll
total 16
drwxr-xr-x 2 root root 4096 Apr  4 00:55 ./
drwxr-xr-x 4 root root 4096 Apr  1 23:48 ../
-rw-r--r-- 1 root root   60 Apr  4 00:55 linux39-ns.yml
-rw-r--r-- 1 root root   60 Apr  4 00:55 linux40-ns.yml

root@k8s-master1:/opt/k8s-data/yaml/namespace# cat linux*
apiVersion: v1 #API version
kind: Namespace #the kind is Namespace
metadata: #metadata
  name: linux39 #namespace name

apiVersion: v1 #API version
kind: Namespace #the kind is Namespace
metadata: #metadata
  name: linux40 #namespace name

root@k8s-master1:/opt/k8s-data/yaml/namespace# kubectl apply -f linux39-ns.yml 
namespace/linux39 created
root@k8s-master1:/opt/k8s-data/yaml/namespace# kubectl apply -f linux40-ns.yml 
namespace/linux40 created
root@k8s-master1:/opt/k8s-data/yaml/namespace# kubectl get ns
NAME                   STATUS   AGE
default                Active   5d10h
kube-node-lease        Active   5d10h
kube-public            Active   5d10h
kube-system            Active   5d10h
kubernetes-dashboard   Active   5d2h
linux39                Active   17s
linux40                Active   12s
test                   Active   2d1h

Case 1: Pod Controller Types

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl apply -f deployment.yml 
deployment.apps/nginx-deployment created

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME                                READY   STATUS              RESTARTS   AGE
nginx-deployment-6997c89dfb-p5t2v   0/1     ContainerCreating   0          13s #still creating; the image is being pulled
nginx-deployment-6997c89dfb-zgb7f   0/1     ContainerCreating   0          13s

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6997c89dfb-6jjm7   1/1     Running   0          65s #now running normally
nginx-deployment-6997c89dfb-p5t2v   1/1     Running   0          4m3s

ReplicaSet

root@master-1:/opt/k8s-data/yaml/linux39/case1# vim rs.yml #ReplicaSet controller
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: ReplicaSet  #ReplicaSet controller
metadata:
  name: frontend
  namespace: linux39
spec:
  replicas: 3
  selector:
    #matchLabels:
    #  app: ng-rs-80
    matchExpressions:
      - {key: app, operator: In, values: [ng-rs-80,ng-rs-81]} #expression matching against several values
  template:
    metadata:
      labels:
        app: ng-rs-80
    spec:
      containers:
      - name: ng-rs-80
        image: nginx
        ports:
        - containerPort: 80
  • First delete the previously created resources
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl delete -f deployment.yml 
deployment.apps "nginx-deployment" deleted
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
No resources found in linux39 namespace.
  • Create the new resources with the ReplicaSet
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME             READY   STATUS              RESTARTS   AGE
frontend-7fhcv   0/1     ContainerCreating   0          89s
frontend-c8sgg   0/1     ContainerCreating   0          89s
frontend-msr4g   0/1     ContainerCreating   0          89s

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME             READY   STATUS    RESTARTS   AGE
frontend-7fhcv   1/1     Running   0          89s
frontend-c8sgg   1/1     Running   0          89s
frontend-msr4g   1/1     Running   0          89s
  • A ReplicaSet created with kubectl create must be deleted before a modification can take effect, unless extra flags are added (see below)
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl delete -f rs.yml 
replicaset.apps "frontend" deleted

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl create --help
Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
  # Create a pod using the data in pod.json.
  kubectl create -f ./pod.json

...omitted...

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl create -f rs.yml --save-config=true #recreate, this time with the extra flag
replicaset.apps/frontend created

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# vim rs.yml 

...omitted...

spec:
  replicas: 2 #change the replica count to 2

...omitted...

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl apply -f rs.yml #applying the change still relies on apply
replicaset.apps/frontend configured

root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39 
NAME             READY   STATUS    RESTARTS   AGE
frontend-fxqkf   1/1     Running   0          47s
frontend-vs9hz   1/1     Running   0          47s

ReplicationController (nearly obsolete)

apiVersion: v1
kind: ReplicationController #ReplicationController controller
metadata: 
  name: ng-rc
  namespace: linux39
spec:
  replicas: 2
  selector:
    app: ng-rc-80
    #app1: ng-rc-81

  template:
    metadata:
      labels:
        app: ng-rc-80
        #app1: ng-rc-81
    spec:
      containers:
      - name: ng-rc-80
        image: nginx
        ports:
        - containerPort: 80
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl delete -f rs.yml 
replicaset.apps "frontend" deleted
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
No resources found in linux39 namespace.
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl apply -f rc.yml 
replicationcontroller/ng-rc created
root@k8s-master1:/opt/k8s-data/yaml/linux39/case1# kubectl get pod -n linux39
NAME          READY   STATUS    RESTARTS   AGE
ng-rc-mv6rh   1/1     Running   0          16s
ng-rc-zpzlr   1/1     Running   0          16s

Service

Why: a pod's IP changes when the pod restarts, so pods addressing each other directly is problematic
What: decouples the service from the application and simplifies calling the service
How: declare a service object
Two kinds are commonly used:
a service inside the k8s cluster: a selector picks the pods, and the Endpoints are created automatically
a service outside the k8s cluster: create the Endpoints by hand, specifying the external service's IP, port, and protocol (a sketch follows this section)
The relationship between kube-proxy and service:
kube-proxy-----------> k8s-apiserver
             watch

kube-proxy watches the k8s-apiserver; as soon as a service resource changes (the k8s API is called to modify service information), kube-proxy adjusts the corresponding load-balancing rules, keeping the service state up to date.
kube-proxy has three proxy modes:
userspace: before k8s 1.1
iptables: before k8s 1.10
ipvs: k8s 1.11 and later; if ipvs is not enabled, it automatically falls back to iptables
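
For the second kind, a minimal sketch: a Service without a selector plus a hand-written Endpoints object of the same name (172.16.0.10:3306 is an illustrative external address, not from this lab):

apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql #must match the Service name so the two are associated
subsets:
- addresses:
  - ip: 172.16.0.10 #IP of the external service
  ports:
  - port: 3306 #port of the external service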

Implementing nginx with a Service and a Deployment

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# pwd
/opt/k8s-data/yaml/linux39/case2
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# ll
total 3916
drwxr-xr-x 2 root root    4096 Apr  4 01:47 ./
drwxr-xr-x 4 root root    4096 Apr  4 01:46 ../
-rw-r--r-- 1 root root     542 Mar 30 18:33 1-deploy_node.yml
-rw-r--r-- 1 root root     214 Mar 30 18:33 2-svc_service.yml
-rw-r--r-- 1 root root     233 Mar 30 18:33 3-svc_NodePort.yml
-rw-r--r-- 1 root root 3983872 Mar 30 18:33 busybox-online.tar.gz
-rw-r--r-- 1 root root     277 Mar 30 18:33 busybox.yaml
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# vim 1-deploy_node.yml
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: linux39
spec:
  replicas: 1
  selector:
    #matchLabels: #rs or deployment
    #  app: ng-deploy3-80
    matchExpressions:
      - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5 
        ports:
        - containerPort: 80
      #nodeSelector:
      #  env: group1
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl apply -f 1-deploy_node.yml 
deployment.apps/nginx-deployment created

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6997c89dfb-727wx   1/1     Running   0          10s

ClusterIP: Access Inside the Cluster

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# pwd
/opt/k8s-data/yaml/linux39/case2
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# vim 2-svc_service.yml 
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# cat 2-svc_service.yml 
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80 
  namespace: linux39
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: ng-deploy-80
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl get svc -n linux39 #get the cluster IP of the service in linux39
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
ng-deploy-80   ClusterIP   192.168.1.206   <none>        80/TCP    23s
  • Enter a pod
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl exec -it net-test1-5fcc69db59-xzvrl sh
/ # apk add curl #install the curl command
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.11/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ca-certificates (20191127-r1)
(2/4) Installing nghttp2-libs (1.40.0-r0)
(3/4) Installing libcurl (7.67.0-r0)
(4/4) Installing curl (7.67.0-r0)
Executing busybox-1.31.1-r9.trigger
Executing ca-certificates-20191127-r1.trigger
OK: 7 MiB in 18 packages
/ # curl 192.168.1.206 #curl the svc cluster IP to check connectivity
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title> #the default nginx page

...omitted...

</body>
</html>
/ # ping ng-deploy-80 #ping the service NAME: it does not resolve
ping: bad address 'ng-deploy-80'
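
The name fails to resolve here because this pod is in the default namespace while the service lives in linux39; a bare service name only resolves within its own namespace. From another namespace the fully qualified name works, assuming the default cluster domain cluster.local:

/ # wget ng-deploy-80.linux39.svc.cluster.local #service-name.namespace.svc.cluster-domain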

Creating a busybox Image

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# docker pull busybox
...omitted...

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# docker images
busybox                                                                       latest              83aa35aa1c79        3 weeks ago         1.22MB
...omitted...

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# docker tag busybox:latest k8s.harbor.com/base-images/busybox

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# docker push k8s.harbor.com/base-images/busybox
...omitted...


root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# cat busybox.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: linux39 #the pod's namespace determines which service names DNS resolves directly
spec:
  containers:
  - image: k8s.harbor.com/base-images/busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always 
    name: busybox
  restartPolicy: Always
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl apply -f busybox.yaml 
pod/busybox created
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
busybox                             1/1     Running   0          8s
nginx-deployment-6997c89dfb-9vblx   1/1     Running   0          28m
  • Test that the service can be reached by name within the same namespace
  • Enter the busybox pod to test:
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl exec -it busybox sh -n linux39
/ # ping ng-deploy-80 #access the service by NAME
PING ng-deploy-80 (192.168.1.206): 56 data bytes #ping gets no replies, but the DNS name resolves, which shows the service is reachable
^C
--- ng-deploy-80 ping statistics ---
6 packets transmitted, 0 packets received, 100% packet loss
/ # wget ng-deploy-80 #download the index.html page
Connecting to ng-deploy-80 (192.168.1.206:80)
saving to 'index.html'
index.html           100% |****************************************************|   612  0:00:00 ETA
'index.html' saved
/ # cat index.html #verify the index.html page
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...omitted...

/ # 
  • Conclusion: within the same namespace, a service can be reached directly by its NAME; specifying the service name in a YAML file is enough for one microservice to call another

Exposing a Service Externally

root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# cat 3-svc_NodePort.yml 
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80 
  namespace: linux39
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80 #the service port inside the pod; make sure it is correct
    nodePort: 30012 #port exposed on the host
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@k8s-master1:/opt/k8s-data/yaml/linux39/case2# kubectl apply -f 3-svc_NodePort.yml 
service/ng-deploy-80 configured
  • The custom port 30012 is listened on by every host in the K8S cluster; reaching this port on any node reaches the service address, and the service port then forwards to the pods behind it
root@node-1:~# ss -ntl
root@node-2:~# ss -ntl
root@node-3:~# ss -ntl
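
Every node should show a listener on the NodePort; a quick check (output omitted here):

root@node-1:~# ss -ntl | grep 30012 #30012 should appear in LISTEN state on every node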


Volume

  • Files in a container are stored on disk ephemerally, which causes problems for some applications running in containers. First, when a container crashes, kubelet restarts it and the files in the container are lost, because the container is rebuilt from a clean state. Second, when several containers run in one Pod, they often need to share files. Kubernetes abstracts the Volume object to solve both problems.
Decouples the data from the image, and shares data between containers.
A Volume is an object k8s abstracts to hold data; it is used for storage.
Commonly used volume types:
    emptyDir: local ephemeral volume
    hostPath: local volume
    nfs and the like: shared volumes
    configmap: configuration files

CASE3: emptyDir

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
root@k8s-master1:/opt/k8s-data/yaml/linux39/case3# pwd
/opt/k8s-data/yaml/linux39/case3
root@k8s-master1:/opt/k8s-data/yaml/linux39/case3# cat deploy_empty.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: linux39
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
root@k8s-master1:/opt/k8s-data/yaml/linux39/case3# pwd
/opt/k8s-data/yaml/linux39/case3
root@k8s-master1:/opt/k8s-data/yaml/linux39/case3# kubectl apply -f deploy_empty.yml 
deployment.apps/nginx-deployment created
root@k8s-master1:/opt/k8s-data/yaml/linux39/case3# kubectl get pod -n linux39
NAME                                READY   STATUS    RESTARTS   AGE
busybox                             1/1     Running   0          28m
nginx-deployment-7cc86d98d5-mk95q   1/1     Running   0          12s
root@k8s-master1:/opt/k8s-data/yaml/linux39/case3# kubectl exec -it nginx-deployment-7cc86d98d5-mk95q bash -n linux39
root@nginx-deployment-7cc86d98d5-mk95q:/# cd /cache/
root@nginx-deployment-7cc86d98d5-mk95q:/cache# echo 112233 > linux39.txt
root@nginx-deployment-7cc86d98d5-mk95q:/cache# exit
exit
root@k8s-master1:/opt/k8s-data/yaml/linux39/case3# kubectl get pod -n linux39 -o wide #check which node the data landed on: node-1
NAME                                READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deployment-7cc86d98d5-mk95q   1/1     Running   0          2m1s   10.10.6.53   node-1   <none>           <none>
  • Locate the data's storage directory on node-1
root@node-1:~# find / -name linux39.txt
/var/lib/kubelet/pods/daad624e-23a0-4850-a523-376b9a4c6969/volumes/kubernetes.io~empty-dir/cache-volume/linux39.txt
  • The path layout above is fixed, but the pod UID (daad624e-23a0-4850-a523-376b9a4c6969) is not, so a * wildcard can be used to match it (see below). The data in an emptyDir is wiped at the same moment the pod is deleted!
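
Using that wildcard, the emptyDir contents can be located without knowing the pod UID (assuming this is the only pod with a cache-volume on the node):

root@node-1:~# ls /var/lib/kubelet/pods/*/volumes/kubernetes.io~empty-dir/cache-volume/
linux39.txt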

CASE4: hostPath

  • hostPath: https://kubernetes.io/zh/docs/concepts/storage/volumes/#hostpath
  • A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. This is not something most Pods need, but it offers a powerful escape hatch for some applications.
  • A hostPath volume mounts a file or directory from the host node's filesystem into the cluster; when the pod is deleted, the volume is not.
root@k8s-master1:/opt/k8s-data/yaml/linux39/case4# pwd
/opt/k8s-data/yaml/linux39/case4
root@k8s-master1:/opt/k8s-data/yaml/linux39/case4# cat deploy_hostPath.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/mysql
          name: data-volume #must match the volume name defined below
      volumes:
      - name: data-volume #the volume's name
        hostPath: #the volume type and its definition
          path: /data/mysql
root@k8s-master1:/opt/k8s-data/yaml/linux39/case4# kubectl apply -f deploy_hostPath.yml 
deployment.apps/nginx-deployment-2 created
root@k8s-master1:/opt/k8s-data/yaml/linux39/case4# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf            1/1     Running   3          5d10h
net-test1-5fcc69db59-xzvrl            1/1     Running   3          5d10h
net-test2-8456fd74f7-ckpsr            1/1     Running   3          5d9h
net-test2-8456fd74f7-nbzx8            1/1     Running   3          5d9h
nginx-deployment-2-7944748bc4-5gg2m   1/1     Running   0          14s
nginx-deployment-5f4dc447b5-wcg2t     1/1     Running   0          2d3h
tomcat-deployment-5cd65b4d74-d4x7s    1/1     Running   2          5d
root@k8s-master1:/opt/k8s-data/yaml/linux39/case4# kubectl get pod -o wide #check which node it landed on: node-1
NAME                                  READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
net-test1-5fcc69db59-c2plf            1/1     Running   3          5d10h   10.10.5.18   node-3   <none>           <none>
net-test1-5fcc69db59-xzvrl            1/1     Running   3          5d10h   10.10.4.14   node-2   <none>           <none>
net-test2-8456fd74f7-ckpsr            1/1     Running   3          5d9h    10.10.6.16   node-1   <none>           <none>
net-test2-8456fd74f7-nbzx8            1/1     Running   3          5d9h    10.10.4.15   node-2   <none>           <none>
nginx-deployment-2-7944748bc4-5gg2m   1/1     Running   0          37s     10.10.6.54   node-1   <none>           <none>
nginx-deployment-5f4dc447b5-wcg2t     1/1     Running   0          2d3h    10.10.4.16   node-2   <none>           <none>
tomcat-deployment-5cd65b4d74-d4x7s    1/1     Running   2          5d      10.10.5.17   node-3   <none>           <none>
  • On the node-1 host, check whether a /data/mysql directory was created automatically
root@k8s-master1:/opt/k8s-data/yaml/linux39/case4# kubectl exec -it nginx-deployment-2-7944748bc4-5gg2m bash 
root@nginx-deployment-2-7944748bc4-5gg2m:/# cd /data/mysql
root@nginx-deployment-2-7944748bc4-5gg2m:/data/mysql# mkdir logs
root@nginx-deployment-2-7944748bc4-5gg2m:/data/mysql# echo bokebi > logs/nginx.logs
  • Verify the data's storage directory on node-1
root@node-1:~# cd /data/mysql/logs/
root@node-1:/data/mysql/logs# ls
nginx.logs
root@node-1:/data/mysql/logs# cat nginx.logs  #the host and the pod container share this single file; edits and deletions affect both
bokebi
  • Verify: when the pod is deleted, is the previous data deleted too?
root@k8s-master1:/opt/k8s-data/yaml/linux39/case4# kubectl delete -f deploy_hostPath.yml 
deployment.apps "nginx-deployment-2" deleted
root@node-1:~# cat /data/mysql/logs/nginx.logs 
bokebi
  • Summary: a hostPath volume mounts a file or directory from the host node's filesystem into the cluster; when the pod is deleted, the volume is not.

CASE5: NFS Volume

  • nfs:
  • An nfs volume mounts an existing NFS (Network File System) share into a container. Unlike emptyDir, when a Pod is deleted the contents of an nfs volume are preserved; the volume is merely unmounted. This means an NFS volume can be pre-populated with data, and that data can be handed over between pods. NFS can be mounted by multiple writers simultaneously.

Warning: before you use an NFS volume, you must run your own NFS server and export the target share.

  • This lab reuses the HA-service machine: 192.168.26.134
root@haproxy-server:~# apt install nfs-server -y

root@haproxy-server:~# mkdir /data/k8sdata -p

root@haproxy-server:~# vim /etc/exports

...omitted...
/data/k8sdata *(rw,no_root_squash)

root@haproxy-server:~# systemctl restart nfs-server.service 

root@haproxy-server:~# systemctl enable nfs-server.service 
  • On the node, check the shares that were exported. If they are not visible, the configuration is wrong and the mounts later in the lab will fail too.
root@node-1:~# apt install nfs-common -y
Reading package lists... Done
Building dependency tree       
Reading state information... Done
nfs-common is already the newest version (1:1.3.4-2.1ubuntu5.2).
0 upgraded, 0 newly installed, 0 to remove and 168 not upgraded.
root@node-1:~# showmount -e 192.168.26.134
Export list for 192.168.26.134:
/data/k8sdata *
  • Mount on the node host
root@node-1:~# mount -t nfs 192.168.26.134:/data/k8sdata /mnt
  • The lab
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# pwd
/opt/k8s-data/yaml/linux39/case5
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# ll
total 16
drwxr-xr-x 2 root root 4096 Apr  4 03:58 ./
drwxr-xr-x 7 root root 4096 Apr  4 02:49 ../
-rw-r--r-- 1 root root  804 Mar 30 18:33 deploy_nfs2.yml
-rw-r--r-- 1 root root  800 Apr  4 03:58 deploy_nfs.yml
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# cat deploy_nfs.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite #directory inside the pod where the NFS share is mounted
          name: my-nfs-volume
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 192.168.26.134 #NFS server address
          path: /data/k8sdata #directory exported by the NFS server

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30011
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl apply -f deploy_nfs.yml 
deployment.apps/nginx-deployment-3 created
service/ng-deploy-80 unchanged
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf            1/1     Running   3          5d10h
net-test1-5fcc69db59-xzvrl            1/1     Running   3          5d10h
net-test2-8456fd74f7-ckpsr            1/1     Running   3          5d10h
net-test2-8456fd74f7-nbzx8            1/1     Running   3          5d10h
nginx-deployment-3-688784467c-nb9st   1/1     Running   0          3m27s
nginx-deployment-5f4dc447b5-wcg2t     1/1     Running   0          2d4h
tomcat-deployment-5cd65b4d74-d4x7s    1/1     Running   2          5d1h
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl exec -it nginx-deployment-3-688784467c-nb9st bash
root@nginx-deployment-3-688784467c-nb9st:/# df -Th
Filesystem                   Type     Size  Used Avail Use% Mounted on
overlay                      overlay   98G  5.9G   88G   7% /
tmpfs                        tmpfs     64M     0   64M   0% /dev
tmpfs                        tmpfs    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda1                    ext4      98G  5.9G   88G   7% /etc/hosts
shm                          tmpfs     64M     0   64M   0% /dev/shm
192.168.26.134:/data/k8sdata nfs4      98G  4.7G   89G   5% /usr/share/nginx/html/mysite
tmpfs                        tmpfs    2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                        tmpfs    2.0G     0  2.0G   0% /proc/acpi
tmpfs                        tmpfs    2.0G     0  2.0G   0% /proc/scsi
tmpfs                        tmpfs    2.0G     0  2.0G   0% /sys/firmware


  • Create data in the share and check whether the mounting nodes can see it:
root@haproxy-server:/data# cd /data/k8sdata/
root@haproxy-server:/data/k8sdata# echo "linux39 test page" > linux39.html


  • Create another pod to check whether one share can be mounted by several pods
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# cat deploy_nfs2.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-site2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-82
  template:
    metadata:
      labels:
        app: ng-deploy-82
    spec:
      containers:
      - name: ng-deploy-82
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite #directory inside the pod where the NFS share is mounted
          name: my-nfs-volume
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 192.168.26.134 #NFS server address
          path: /data/k8sdata #directory exported by the NFS server

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-82
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30032 #externally exposed access port
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-82
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl apply -f deploy_nfs2.yml 
deployment.apps/nginx-deployment-site2 unchanged
service/ng-deploy-82 created
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf                1/1     Running   3          5d11h
net-test1-5fcc69db59-xzvrl                1/1     Running   3          5d11h
net-test2-8456fd74f7-ckpsr                1/1     Running   3          5d10h
net-test2-8456fd74f7-nbzx8                1/1     Running   3          5d10h
nginx-deployment-3-688784467c-nb9st       1/1     Running   0          20m
nginx-deployment-5f4dc447b5-wcg2t         1/1     Running   0          2d4h
nginx-deployment-site2-66756d5865-hx6rx   1/1     Running   0          23s
tomcat-deployment-5cd65b4d74-d4x7s        1/1     Running   2          5d1h
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl exec -it nginx-deployment-site2-66756d5865-hx6rx bash
root@nginx-deployment-site2-66756d5865-hx6rx:/# df -Th
Filesystem                   Type     Size  Used Avail Use% Mounted on
overlay                      overlay   98G  6.6G   87G   8% /
tmpfs                        tmpfs     64M     0   64M   0% /dev
tmpfs                        tmpfs    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda1                    ext4      98G  6.6G   87G   8% /etc/hosts
shm                          tmpfs     64M     0   64M   0% /dev/shm
192.168.26.134:/data/k8sdata nfs4      98G  4.7G   89G   5% /usr/share/nginx/html/mysite
tmpfs                        tmpfs    2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs                        tmpfs    2.0G     0  2.0G   0% /proc/acpi
tmpfs                        tmpfs    2.0G     0  2.0G   0% /proc/scsi
tmpfs                        tmpfs    2.0G     0  2.0G   0% /sys/firmware

mark

  • This demonstrates one NFS share mounted by several pods at once, so the data is shared.
  • Next, mount several NFS shares inside one pod, giving the pod multiple data sources, similar to keeping the data of a service's multiple sites apart.
root@haproxy-server:~# mkdir /data/linux39
root@haproxy-server:~# vim /etc/exports

...omitted...
/data/k8sdata *(rw,no_root_squash)
/data/linux39 *(rw,no_root_squash)

root@haproxy-server:~# systemctl restart nfs-server.service
  • On the node, check that both shares of 192.168.26.134 are now exported and discoverable
root@node-1:~# showmount -e 192.168.26.134
Export list for 192.168.26.134:
/data/linux39 *
/data/k8sdata *
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# cat deploy_nfs.yml 
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/mysite #maps to the /data/k8sdata export
          name: my-nfs-volume #identifies which NFS volume to mount
        - mountPath: /usr/share/nginx/html #maps to the /data/linux39 export
          name: linux39-nfs-volume #identifies which NFS volume to mount
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 192.168.26.134
          path: /data/k8sdata
      - name: linux39-nfs-volume
        nfs:
          server: 192.168.26.134
          path: /data/linux39
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30016
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf            1/1     Running   3          5d11h
net-test1-5fcc69db59-xzvrl            1/1     Running   3          5d11h
net-test2-8456fd74f7-ckpsr            1/1     Running   3          5d11h
net-test2-8456fd74f7-nbzx8            1/1     Running   3          5d11h
nginx-deployment-3-674cf8d848-ltg7q   1/1     Running   0          50s
nginx-deployment-5f4dc447b5-wcg2t     1/1     Running   0          2d5h
tomcat-deployment-5cd65b4d74-d4x7s    1/1     Running   2          5d2h
root@k8s-master1:/opt/k8s-data/yaml/linux39/case5# kubectl exec -it nginx-deployment-3-674cf8d848-ltg7q bash
root@nginx-deployment-3-674cf8d848-ltg7q:/# df -Th
Filesystem                   Type     Size  Used Avail Use% Mounted on
overlay                      overlay   98G  5.9G   88G   7% /
tmpfs                        tmpfs     64M     0   64M   0% /dev
tmpfs                        tmpfs    2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda1                    ext4      98G  5.9G   88G   7% /etc/hosts
shm                          tmpfs     64M     0   64M   0% /dev/shm
192.168.26.134:/data/linux39 nfs4      98G  4.7G   89G   5% /usr/share/nginx/html
tmpfs                        tmpfs    2.0G   12K  2.0G   1% /run/secrets/kubernetes.io/serviceaccount
192.168.26.134:/data/k8sdata nfs4      98G  4.7G   89G   5% /usr/share/nginx/html/mysite
tmpfs                        tmpfs    2.0G     0  2.0G   0% /proc/acpi
tmpfs                        tmpfs    2.0G     0  2.0G   0% /proc/scsi
tmpfs                        tmpfs    2.0G     0  2.0G   0% /sys/firmware


  • When exporting the NFS shares, grant access not only to the pod network segment but also to the host network segment, otherwise the mount fails. Mind the scope of the authorization, and note that the volume is not mounted with a manual mount command.

  • For example:

root@HA-server1:/data/k8sdata# vim /etc/exports 
/data/k8sdata 10.10.0.0/16 192.168.26.0/24(rw,no_root_squash) #separate the entries with a space
/data/linux39 10.10.0.0/16 192.168.26.0/24(rw,no_root_squash)

CASE6: configmap

  • configmap: https://kubernetes.io/zh/docs/concepts/storage/volumes/#configmap
  • The configmap resource provides a way to inject configuration data into Pods. Data stored in a ConfigMap object can be referenced by a volume of type configMap and then consumed by the containerized applications running in the Pod.
  • Most configuration is baked into the image; configuration that several pods need to reuse can go into a configmap (an imperative way to generate one is sketched below).
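
A sketch of generating the same ConfigMap imperatively instead of writing the manifest by hand, assuming the server block below has been saved to a hypothetical local file mysite.conf:

kubectl create configmap nginx-config --from-file=default=mysite.conf --dry-run -o yaml
#--from-file=default=mysite.conf stores the file's content under the key "default"
#--dry-run -o yaml prints the generated manifest for review instead of creating the object
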
root@k8s-master1:/opt/k8s-data/yaml/linux39/case6# pwd
/opt/k8s-data/yaml/linux39/case6
root@k8s-master1:/opt/k8s-data/yaml/linux39/case6# cat deploy_configmap.yml 
apiVersion: v1
kind: ConfigMap #the kind is ConfigMap
metadata:
  name: nginx-config #the configmap is referenced by this name
data:
 default: | #'default' is the name of a key; the service content defined under it follows
    server {
       listen       80;
       server_name  www.mysite.com;
       index        index.html;

       location / {
           root /data/nginx/html;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }

---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/html
          name: nginx-static-dir
        - name: nginx-config #references nginx-config, whose kind is configMap
          mountPath:  /etc/nginx/conf.d
      volumes:
      - name: nginx-static-dir
        hostPath:
          path: /data/nginx/linux39
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
             - key: default #the key references the 'default' entry defined at the top
               path: mysite.conf #file name under the mount path; combined: /etc/nginx/conf.d/mysite.conf

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019 #externally exposed access port
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
root@k8s-master1:/opt/k8s-data/yaml/linux39/case6# kubectl get pod
NAME                                 READY   STATUS    RESTARTS   AGE
net-test1-5fcc69db59-c2plf           1/1     Running   3          5d11h
net-test1-5fcc69db59-xzvrl           1/1     Running   3          5d11h
net-test2-8456fd74f7-ckpsr           1/1     Running   3          5d11h
net-test2-8456fd74f7-nbzx8           1/1     Running   3          5d11h
nginx-deployment-4-8c449b55f-kzcxg   1/1     Running   0          59s
nginx-deployment-5f4dc447b5-wcg2t    1/1     Running   0          2d5h
tomcat-deployment-5cd65b4d74-d4x7s   1/1     Running   2          5d2h
root@k8s-master1:/opt/k8s-data/yaml/linux39/case6# kubectl get service
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes              ClusterIP   192.168.0.1     <none>        443/TCP        5d14h
magedu-nginx-service    NodePort    192.168.3.55    <none>        80:30004/TCP   2d9h
magedu-tomcat-service   NodePort    192.168.1.200   <none>        80:30005/TCP   5d2h
ng-deploy-80            NodePort    192.168.1.114   <none>        81:30019/TCP   3m36s
root@k8s-master1:/opt/k8s-data/yaml/linux39/case6# kubectl exec -it nginx-deployment-4-8c449b55f-kzcxg  bash
root@nginx-deployment-4-8c449b55f-kzcxg:/# cat /etc/nginx/conf.d/mysite.conf 
server {
   listen       80;
   server_name  www.mysite.com;
   index        index.html;

   location / {
       root /data/nginx/html;
       if (!-e $request_filename) {
           rewrite ^/(.*) /index.html last;
       }
   }
}


  • Create the home page file on node-1 and check that the configmap-driven configuration serves it
root@node-1:~# cd /data/nginx/linux39/
root@node-1:/data/nginx/linux39# ll
total 8
drwxr-xr-x 2 root root 4096 Apr  4 04:49 ./
drwxr-xr-x 3 root root 4096 Apr  4 04:49 ../
root@node-1:/data/nginx/linux39# vim index.html
root@node-1:/data/nginx/linux39# cat index.html 
configMap test page


DaemonSet

  • DaemonSet: https://kubernetes.io/zh/docs/concepts/workloads/controllers/daemonset/

  • A DaemonSet ensures that all (or some) nodes run a copy of a Pod. When nodes join the cluster, a Pod is added for them; when nodes are removed from the cluster, those Pods are reclaimed. Deleting a DaemonSet deletes all the Pods it created.

  • A DaemonSet creates an identical pod on every node of the current k8s cluster; it is mainly used when the same work has to run on every node, e.g.:
    1. Log collection
    2. Prometheus
    3. flannel

root@k8s-master1:/opt/k8s-data/yaml/linux39# pwd
/opt/k8s-data/yaml/linux39
root@k8s-master1:/opt/k8s-data/yaml/linux39# vim daemonset.yml 
root@k8s-master1:/opt/k8s-data/yaml/linux39# cat daemonset.yml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log 
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log #mount /var/log into the pod
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers #mount /var/lib/docker/containers into the pod
root@k8s-master1:/opt/k8s-data/yaml/linux39# kubectl apply -f daemonset.yml 
daemonset.apps/fluentd-elasticsearch created
root@k8s-master1:/opt/k8s-data/yaml/linux39# kubectl get pod -n kube-system 
NAME                                  READY   STATUS              RESTARTS   AGE
coredns-7f9c544f75-cz7f8              1/1     Running             4          5d11h
coredns-7f9c544f75-h9s2f              1/1     Running             2          5d4h
etcd-k8s-master1                      1/1     Running             4          5d14h
etcd-k8s-master2                      1/1     Running             2          5d13h
etcd-k8s-master3                      1/1     Running             2          5d13h
fluentd-elasticsearch-ftmdh           0/1     ContainerCreating   0          48s
fluentd-elasticsearch-lfwkv           0/1     ContainerCreating   0          48s
fluentd-elasticsearch-s25qm           0/1     ContainerCreating   0          48s
fluentd-elasticsearch-v8fj6           0/1     ContainerCreating   0          48s
fluentd-elasticsearch-wfdnf           0/1     ContainerCreating   0          48s
fluentd-elasticsearch-xl9f4           0/1     ContainerCreating   0          48s
  • Every master and every node in the cluster now runs the same pod, which can be used to collect and serve the same resources everywhere.