Kubernetes is currently the most popular container orchestration technology, and more and more applications are migrating to it. Kubernetes' existing resource objects such as ReplicaSet, Deployment, and Service already cover the basic needs of stateless applications: automatic scaling, load balancing, and so on. Stateful, distributed applications, however, usually come with their own modeling conventions — Prometheus, Etcd, Zookeeper, Elasticsearch, and the like. Deploying them often requires domain-specific knowledge, and scaling or upgrading them raises questions such as how to keep the service available throughout. Kubernetes Operators emerged to simplify deploying stateful, distributed applications. An Operator is an application-specific controller that extends the Kubernetes API through CRDs (Custom Resource Definitions); it can create, configure, and manage a particular stateful application without working directly with low-level Kubernetes resource objects such as Pod, Deployment, or Service. Elastic Cloud on Kubernetes (ECK) is one such Operator: it manages the components of the Elastic Stack — Elasticsearch, Kibana, APM, Beats, and others. For example, defining a single `Elasticsearch` CRD object is enough for ECK to stand up an Elasticsearch cluster for us.

## Install the Elastic custom resource definitions with `kubectl create`

```shell
[root@k8s-192-168-1-140 ~]# kubectl create -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
customresourcedefinition.apiextensions.k8s.io/agents.agent.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/apmservers.apm.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/beats.beat.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticmapsservers.maps.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearchautoscalers.autoscaling.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/elasticsearches.elasticsearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/enterprisesearches.enterprisesearch.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created
customresourcedefinition.apiextensions.k8s.io/stackconfigpolicies.stackconfigpolicy.k8s.elastic.co created
```

## Install the operator and its RBAC rules with `kubectl apply`

```shell
[root@k8s-192-168-1-140 ~]# kubectl apply -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml
namespace/elastic-system created
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
validatingwebhookconfiguration.admissionregistration.k8s.io/elastic-webhook.k8s.elastic.co created
```

## Check that the operator has started

```shell
[root@k8s-192-168-1-140 ~]# kubectl get -n elastic-system pods
NAME                 READY   STATUS    RESTARTS   AGE
elastic-operator-0   1/1     Running   0          8m38s
```

## Deploy a cluster

The Operator automatically creates and manages the Kubernetes resources needed to reach the desired state of the Elasticsearch cluster. It may take a few minutes until all the resources are created and the cluster is ready for use.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
EOF
```

## Storage

By default 1Gi of storage is requested at creation time. You can declare the desired capacity in the manifest instead:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: nfs-storage
    config:
      node.store.allow_mmap: false
EOF
```

## Check the deployment status

```shell
[root@k8s-192-168-1-140 ~]# kubectl get pod -A
NAMESPACE        NAME                                       READY   STATUS    RESTARTS          AGE
default          nginx-66686b6766-tdwt2                     1/1     Running   2 (invalid ago)   81d
default          quickstart-es-default-0                    0/1     Pending   0                 2m1s
elastic-system   elastic-operator-0                         1/1     Running   0                 12m
kube-system      calico-kube-controllers-78dcb7b647-8f2ph   1/1     Running   2 (invalid ago)   81d
kube-system      calico-node-hpwvr                          1/1     Running   2 (invalid ago)   81d
kube-system      coredns-6746f4cb74-bhkv8                   1/1     Running   2 (invalid ago)   81d
kube-system      metrics-server-55c56cb875-bwbpr            1/1     Running   2 (invalid ago)   81d
kube-system      node-local-dns-nz4q7                       1/1     Running   2 (invalid ago)   81d
```

`quickstart-es-default-0` stays Pending here because its PersistentVolumeClaim cannot be bound yet; the NFS dynamic provisioner set up below supplies the storage.
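While waiting for the cluster, it can be handy to narrow a `kubectl get pod -A` listing down to Pods that are not yet Running. A minimal sketch of such a filter — fed here from a captured two-row sample so it can be shown standalone; in practice you would pipe `kubectl get pod -A --no-headers` into the same `awk` expression:

```shell
# Filter a pod listing down to Pods whose STATUS column is not "Running".
# With `kubectl get pod -A --no-headers`, STATUS is the 4th whitespace-
# separated field. The sample below stands in for live kubectl output.
sample='default          quickstart-es-default-0   0/1   Pending   0   2m1s
elastic-system   elastic-operator-0        1/1   Running   0   12m'

printf '%s\n' "$sample" | awk '$4 != "Running"'
# prints only the quickstart-es-default-0 (Pending) row
```

The same filter works for any `kubectl get` listing once headers are suppressed, since `awk` only cares about the column position.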
## View the logs

```shell
[root@k8s-192-168-1-140 ~]# kubectl logs -f quickstart-es-default-0
Defaulted container "elasticsearch" out of: elasticsearch, elastic-internal-init-filesystem (init), elastic-internal-suspend (init)
```

## Install NFS for dynamic provisioning

```shell
[root@k8s-192-168-1-140 ~]# yum install nfs-utils -y
[root@k8s-192-168-1-140 ~]# mkdir /nfs
[root@k8s-192-168-1-140 ~]# vim /etc/exports
/nfs *(rw,sync,no_root_squash,no_subtree_check)
[root@k8s-192-168-1-140 ~]# systemctl restart rpcbind
[root@k8s-192-168-1-140 ~]# systemctl restart nfs-server
[root@k8s-192-168-1-140 ~]# systemctl enable rpcbind
[root@k8s-192-168-1-140 ~]# systemctl enable nfs-server
```

## Write the NFS provisioner manifest for Kubernetes

```shell
[root@k8s-192-168-1-140 ~]# vim nfs-storage.yaml
[root@k8s-192-168-1-140 ~]# cat nfs-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"   # whether to archive a PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/chenby/nfs-subdir-external-provisioner:v4.0.2
        # resources:
        #   limits:
        #     cpu: 10m
        #   requests:
        #     cpu: 10m
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.1.140   # your NFS server address
        - name: NFS_PATH
          value: /nfs/           # the directory exported by the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.1.140
          path: /nfs/
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```

## Install it

```shell
[root@k8s-192-168-1-140 ~]# kubectl apply -f nfs-storage.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
```
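Each PersistentVolume provisioned this way is backed by a subdirectory under the export. If I recall the nfs-subdir-external-provisioner documentation correctly, the directory is named `${namespace}-${pvcName}-${pvName}`, so the path on the NFS server can be predicted from the claim. A small sketch of the naming, using the PVC and PV names from this walkthrough:

```shell
# Compose the on-disk directory the provisioner would use for a PV.
# Naming convention "${namespace}-${pvcName}-${pvName}" is taken from the
# nfs-subdir-external-provisioner project docs (verify against your version).
namespace=default
pvcName=elasticsearch-data-quickstart-es-default-0
pvName=pvc-2df832aa-1c54-4af6-8384-5e5c5f167445

echo "/nfs/${namespace}-${pvcName}-${pvName}"
```

With `archiveOnDelete: "true"`, deleting the PV renames this directory to an `archived-` prefix instead of removing the data.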
## Check the storage

```shell
[root@k8s-192-168-1-140 ~]# kubectl get storageclasses.storage.k8s.io
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  6h7m
[root@k8s-192-168-1-140 ~]# kubectl get pvc
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
elasticsearch-data-quickstart-es-default-0   Bound    pvc-2df832aa-1c54-4af6-8384-5e5c5f167445   5Gi        RWO            nfs-storage    <unset>                 39s
[root@k8s-192-168-1-140 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-2df832aa-1c54-4af6-8384-5e5c5f167445   5Gi        RWO            Delete           Bound    default/elasticsearch-data-quickstart-es-default-0   nfs-storage    <unset>                          43s
```

## Check the ES Services

```shell
[root@k8s-192-168-1-140 ~]# kubectl get service
NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes                    ClusterIP   10.68.0.1      <none>        443/TCP        81d
nginx                         NodePort    10.68.148.53   <none>        80:30330/TCP   81d
quickstart-es-default         ClusterIP   None           <none>        9200/TCP       74s
quickstart-es-http            ClusterIP   10.68.66.232   <none>        9200/TCP       75s
quickstart-es-internal-http   ClusterIP   10.68.121.73   <none>        9200/TCP       75s
quickstart-es-transport       ClusterIP   None           <none>        9300/TCP       75s
```

## Get the ES password

```shell
[root@k8s-192-168-1-140 ~]# PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
V3VPqwQMURTSg6zFYvVIsH13
[root@k8s-192-168-1-140 ~]# curl -u "elastic:$PASSWORD" -k https://10.68.66.232:9200
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "JNqGubnmSeao_LO-JmypHg",
  "version" : {
    "number" : "9.2.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ed771e6976fac1a085affabd45433234a4babeaf",
    "build_date" : "2025-11-27T08:06:51.614397514Z",
    "build_snapshot" : false,
    "lucene_version" : "10.3.2",
    "minimum_wire_compatibility_version" : "8.19.0",
    "minimum_index_compatibility_version" : "8.0.0"
  },
  "tagline" : "You Know, for Search"
}
```

## Install Kibana

```shell
cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 9.2.2
  count: 1
  elasticsearchRef:
    name: quickstart
EOF
```

## Check the service status and password

```shell
[root@k8s-192-168-1-140 ~]# kubectl get kibana
NAME         HEALTH   NODES   VERSION   AGE
quickstart   red              9.2.2     16s

# Get the password
[root@k8s-192-168-1-140 ~]# kubectl get secret quickstart-es-elastic-user -o jsonpath='{.data.elastic}' | base64 --decode; echo
V3VPqwQMURTSg6zFYvVIsH13

[root@k8s-192-168-1-140 ~]# kubectl get service
NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes                    ClusterIP   10.68.0.1       <none>        443/TCP        81d
nginx                         NodePort    10.68.148.53    <none>        80:30330/TCP   81d
quickstart-es-default         ClusterIP   None            <none>        9200/TCP       2m47s
quickstart-es-http            ClusterIP   10.68.66.232    <none>        9200/TCP       2m48s
quickstart-es-internal-http   ClusterIP   10.68.121.73    <none>        9200/TCP       2m48s
quickstart-es-transport       ClusterIP   None            <none>        9300/TCP       2m48s
quickstart-kb-http            ClusterIP   10.68.103.103   <none>        5601/TCP       24s
[root@k8s-192-168-1-140 ~]# kubectl get service quickstart-kb-http
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-kb-http   ClusterIP   10.68.103.103   <none>        5601/TCP   2m39s
```

## Open access

```shell
[root@k8s-192-168-1-140 ~]# kubectl port-forward --address 0.0.0.0 service/quickstart-kb-http 5601
Forwarding from 0.0.0.0:5601 -> 5601
```

Login URL: https://192.168.1.140:5601/login — user `elastic`, password `V3VPqwQMURTSg6zFYvVIsH13`.

## Change the ES replica count

```shell
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 9.2.2
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
EOF
```
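Once all three nodes have joined, Elasticsearch's `_cluster/health` endpoint (e.g. `curl -u "elastic:$PASSWORD" -k https://10.68.66.232:9200/_cluster/health`) should report `"status":"green"` and `"number_of_nodes":3`. A sketch of pulling the status field out of such a response with `sed` — the JSON below is a hand-written sample standing in for live `curl` output:

```shell
# Extract the "status" field from a cluster-health JSON response.
# Sample payload for illustration; in practice pipe curl's output in.
response='{"cluster_name":"quickstart","status":"green","number_of_nodes":3}'

status=$(printf '%s' "$response" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')
echo "$status"   # prints: green
```

For anything beyond a quick check, a proper JSON parser such as `jq` is the safer choice; the `sed` expression only works because the field is a simple quoted string.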
## Check the status

```shell
[root@k8s-192-168-1-140 ~]# kubectl get pod -A
NAMESPACE        NAME                                       READY   STATUS    RESTARTS          AGE
default          nfs-client-provisioner-58d465c998-w8mfq    1/1     Running   0                 5h38m
default          nginx-66686b6766-tdwt2                     1/1     Running   2 (invalid ago)   81d
default          quickstart-es-default-0                    1/1     Running   0                 5h51m
default          quickstart-kb-57cc78f6b8-b8fpf             1/1     Running   0                 5h24m
elastic-system   elastic-operator-0                         1/1     Running   0                 6h2m
kube-system      calico-kube-controllers-78dcb7b647-8f2ph   1/1     Running   2 (invalid ago)   81d
kube-system      calico-node-hpwvr                          1/1     Running   2 (invalid ago)   81d
kube-system      coredns-6746f4cb74-bhkv8                   1/1     Running   2 (invalid ago)   81d
kube-system      metrics-server-55c56cb875-bwbpr            1/1     Running   2 (invalid ago)   81d
kube-system      node-local-dns-nz4q7                       1/1     Running   2 (invalid ago)   81d
```

Watching the rollout with `-w` shows the two new nodes start one after the other:

```shell
[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w
NAMESPACE        NAME                                       READY   STATUS            RESTARTS          AGE
default          quickstart-es-default-0                    1/1     Running           0                 5h51m
default          quickstart-es-default-1                    0/1     Pending           0                 0s
default          quickstart-es-default-1                    0/1     Init:0/2          0                 0s
default          quickstart-es-default-1                    0/1     Init:1/2          0                 3s
default          quickstart-es-default-1                    0/1     PodInitializing   0                 4s
default          quickstart-es-default-1                    0/1     Running           0                 5s
default          quickstart-es-default-1                    1/1     Running           0                 26s
default          quickstart-es-default-2                    0/1     Pending           0                 0s
default          quickstart-es-default-2                    0/1     Init:0/2          0                 0s
default          quickstart-es-default-2                    0/1     Init:1/2          0                 3s
default          quickstart-es-default-2                    0/1     PodInitializing   0                 4s
default          quickstart-es-default-2                    0/1     Running           0                 5s
default          quickstart-es-default-2                    1/1     Running           0                 33s
^C
[root@k8s-192-168-1-140 ~]# kubectl get pod -A -w
NAMESPACE        NAME                                       READY   STATUS    RESTARTS          AGE
default          nfs-client-provisioner-58d465c998-w8mfq    1/1     Running   0                 5h40m
default          nginx-66686b6766-tdwt2                     1/1     Running   2 (invalid ago)   81d
default          quickstart-es-default-0                    1/1     Running   0                 5h53m
default          quickstart-es-default-1                    1/1     Running   0                 2m19s
default          quickstart-es-default-2                    1/1     Running   0                 111s
default          quickstart-kb-57cc78f6b8-b8fpf             1/1     Running   0                 5h26m
elastic-system   elastic-operator-0                         1/1     Running   0                 6h4m
kube-system      calico-kube-controllers-78dcb7b647-8f2ph   1/1     Running   2 (invalid ago)   81d
kube-system      calico-node-hpwvr                          1/1     Running   2 (invalid ago)   81d
kube-system      coredns-6746f4cb74-bhkv8                   1/1     Running   2 (invalid ago)   81d
kube-system      metrics-server-55c56cb875-bwbpr            1/1     Running   2 (invalid ago)   81d
kube-system      node-local-dns-nz4q7                       1/1     Running   2 (invalid ago)   81d
```

## Uninstall

```shell
# Delete all Elastic resources in all namespaces
kubectl get namespaces --no-headers -o custom-columns=:metadata.name \
  | xargs -n1 kubectl delete elastic --all -n

# Delete the operator and the CRDs
kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/operator.yaml
kubectl delete -f https://download.elastic.co/downloads/eck/3.2.0/crds.yaml
```
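The per-namespace delete pipeline above is easy to preview before running it for real: substitute `echo` for `kubectl` in the `xargs` stage, so each command is printed instead of executed. A dry-run sketch with a fixed namespace list standing in for the live `kubectl get namespaces` output:

```shell
# Preview the commands the uninstall pipeline would run. `xargs -n1`
# invokes the command once per namespace, appending the name after `-n`.
printf '%s\n' default elastic-system kube-system \
  | xargs -n1 echo kubectl delete elastic --all -n
# prints:
#   kubectl delete elastic --all -n default
#   kubectl delete elastic --all -n elastic-system
#   kubectl delete elastic --all -n kube-system
```

Dropping the `echo` (and restoring the `kubectl get namespaces` stage) turns the preview back into the real cleanup.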