Deploying Kubernetes v1.18.x from Binaries (Worker Node Components)
Following the previous article, Deploying Kubernetes v1.18.x from Binaries (Master Node Components), completing the work below brings the initial build of the Kubernetes cluster to a close. The worker node components are kubelet and kube-proxy.

1. Deploying kubelet
The kubelet startup parameters:
--hostname-override: the node's display name; must be unique within the cluster.
--network-plugin: enables CNI.
--kubeconfig: an empty path; the file is generated automatically and then used to connect to the apiserver.
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver.
--cni-bin-dir: path to the CNI plugin binaries.
--cgroup-driver: must match the docker daemon's exec-opts ["native.cgroupdriver=systemd"], otherwise kubelet reports an error.
--config: configuration parameter file.
--cert-dir: directory where the kubelet certificates are generated.
--pod-infra-container-image: image for the infrastructure (pause) container that manages each Pod's network.
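The creation of the kubelet configuration file itself is not shown here; below is a sketch of how the flags above could be combined into /xdd/soft/kubernetes/cfg/kubelet.conf, following the /xdd path layout used throughout this series. The log directory, the kubelet-config.yml filename, and the pause image registry are assumptions, not taken from the article; adjust them to your environment.

```shell
# Hypothetical kubelet.conf assembling the flags described above.
# Paths follow the /xdd layout of this series; the pause image is an assumption.
KUBELET_OPTS='--logtostderr=false \
--v=2 \
--log-dir=/xdd/logs/kubelet \
--hostname-override=k8s-master \
--network-plugin=cni \
--kubeconfig=/xdd/soft/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/xdd/soft/kubernetes/cfg/bootstrap.kubeconfig \
--config=/xdd/soft/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/xdd/soft/kubernetes/ssl \
--cni-bin-dir=/xdd/soft/cni/bin \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0'
```

The file would then be referenced from the kubelet systemd unit via EnvironmentFile, in the same way the kube-proxy unit below references kube-proxy.conf.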
[root@k8s-master01 cfg]# cd /xdd/soft/kubernetes/cfg
KUBE_APISERVER='https://172.30.103.73:6443'   # apiserver IP:PORT
TOKEN='761f38b83806866fc6f524f85f269073'      # must match the token in token.csv
Generate the kubelet bootstrap kubeconfig file:
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/xdd/soft/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.30.103.73:6443 \
  --kubeconfig=bootstrap.kubeconfig
# Set client credentials
kubectl config set-credentials 'kubelet-bootstrap' \
  --token=761f38b83806866fc6f524f85f269073 \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user='kubelet-bootstrap' \
  --kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
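The TOKEN used above must match an entry in the token.csv created on the master in the previous article. A token of the right shape can be generated as below; the user, uid, and group fields shown in the token.csv line are the conventional ones for bootstrap tokens and are assumptions here, not values from the article.

```shell
# Generate a 32-character hex bootstrap token, the format expected in token.csv
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${TOKEN}"
# Conventional token.csv line format: token,user,uid,"group" (field values assumed)
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\""
```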
[root@k8s-master01 cfg]# kubectl get csr
NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-Q6xPd9TXJ8hedIvsY1vViFmkuxsl7pPpZ9yvYb4zVxI   2m1s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
# Approve the request
[root@k8s-master01 cfg]# kubectl certificate approve node-csr-Q6xPd9TXJ8hedIvsY1vViFmkuxsl7pPpZ9yvYb4zVxI
certificatesigningrequest.certificates.k8s.io/node-csr-Q6xPd9TXJ8hedIvsY1vViFmkuxsl7pPpZ9yvYb4zVxI approved
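With several nodes joining, approving each CSR by hand gets tedious, and the filtering step can be scripted. The sketch below parses sample `kubectl get csr` output (embedded here so the parsing is reproducible without a cluster) and shows the loop that would approve each Pending request on a live cluster:

```shell
# Sample `kubectl get csr` output; on a real cluster this would be
# csr_output=$(kubectl get csr)
csr_output='NAME                                                   AGE    SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-Q6xPd9TXJ8hedIvsY1vViFmkuxsl7pPpZ9yvYb4zVxI   2m1s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending'

# Names of requests still Pending (the header line is skipped because its
# last field is CONDITION, not Pending)
pending=$(printf '%s\n' "$csr_output" | awk '$NF == "Pending" {print $1}')
echo "$pending"

# On a live cluster each name would then be approved:
# for csr in $pending; do kubectl certificate approve "$csr"; done
```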
# Check the node
[root@k8s-master cfg]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   <none>   5m12s   v1.18.19
Note: the node shows NotReady because the CNI network plugin has not been deployed yet.
2. Deploying kube-proxy
Create the configuration file:
cat > /xdd/soft/kubernetes/cfg/kube-proxy.conf <<EOF
KUBE_PROXY_OPTS='--logtostderr=false \\
--v=2 \\
--log-dir=/xdd/logs/kube-proxy \\
--config=/xdd/soft/kubernetes/cfg/kube-proxy-config.yml'
EOF
mkdir -p /xdd/logs/kube-proxy
Create the configuration parameter file:
cat > /xdd/soft/kubernetes/cfg/kube-proxy-config.yml <<EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /xdd/soft/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master
clusterCIDR: 10.244.0.0/16
mode: ipvs
ipvs:
  scheduler: 'rr'
iptables:
  masqueradeAll: true
EOF
Generate the kube-proxy certificate. The self-signed CA is the same one used for kube-apiserver:
[root@k8s-master ~]# mkdir -p /xdd/soft/tls/kube-proxy
[root@k8s-master ~]# cd /xdd/soft/tls/kube-proxy/
[root@k8s-master kube-proxy]# cp /xdd/soft/tls/kube-apiserver/ca.pem /xdd/soft/tls/kube-proxy/
[root@k8s-master kube-proxy]# cp /xdd/soft/tls/kube-apiserver/ca-key.pem /xdd/soft/tls/kube-proxy/
[root@k8s-master kube-proxy]# cp /xdd/soft/tls/kube-apiserver/ca-config.json /xdd/soft/tls/kube-proxy/
cat kube-proxy-csr.json
# Generate the certificate
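The contents of kube-proxy-csr.json are not shown above. A plausible version, modelled on the CSR files typically used with cfssl in deployment guides of this kind, is sketched below; every field value is an assumption, not taken from the article, and should match the names section of your other CSRs and your ca-config.json. The sketch writes to a temp directory rather than /xdd/soft/tls/kube-proxy so it can run anywhere:

```shell
# Hypothetical kube-proxy CSR; the CN/O values are assumptions, not from the article.
workdir=$(mktemp -d)
cat > "${workdir}/kube-proxy-csr.json" <<'EOF'
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "C": "CN", "L": "BeiJing", "ST": "BeiJing", "O": "k8s", "OU": "System" }
  ]
}
EOF
cat "${workdir}/kube-proxy-csr.json"
```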
[root@k8s-master kube-proxy]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
[root@k8s-master kube-proxy]# ls kube-proxy*pem
kube-proxy-key.pem  kube-proxy.pem
[root@k8s-master kube-proxy]# cp kube-proxy*pem /xdd/soft/kubernetes/ssl/
[root@k8s-master kube-proxy]# ls /xdd/soft/kubernetes/ssl/kube-proxy*pem
/xdd/soft/kubernetes/ssl/kube-proxy-key.pem  /xdd/soft/kubernetes/ssl/kube-proxy.pem
Generate the kube-proxy.kubeconfig file:
[root@k8s-master kube-proxy]# cd /xdd/soft/kubernetes/cfg/
KUBE_APISERVER='https://172.30.103.73:6443'
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/xdd/soft/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=https://172.30.103.73:6443 \
  --kubeconfig=kube-proxy.kubeconfig
# Set client credentials
kubectl config set-credentials kube-proxy \
  --client-certificate=/xdd/soft/kubernetes/ssl/kube-proxy.pem \
  --client-key=/xdd/soft/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Configure the systemd unit:
cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/xdd/soft/kubernetes/cfg/kube-proxy.conf
ExecStart=/xdd/soft/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl start kube-proxy
[root@k8s-master cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-master cfg]# systemctl status kube-proxy

3. Deploying the CNI network
Create the CNI directories:
[root@k8s-master package]# mkdir -p /xdd/soft/cni/{bin,cfg}
[root@k8s-master package]# mkdir -p /xdd/package/cni
Download the binary package:
[root@k8s-master cni]# wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz
[root@k8s-master cni]# wget https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz.sha512
Verify its integrity:
[root@k8s-master cni]# sha512sum -c cni-plugins-linux-amd64-v0.9.1.tgz.sha512
cni-plugins-linux-amd64-v0.9.1.tgz: OK
Deploy the CNI plugins:
[root@k8s-master cni]# tar -zxvf cni-plugins-linux-amd64-v0.9.1.tgz -C /xdd/soft/cni/bin/
[root@k8s-master cni]# ll /xdd/soft/cni/bin/
[root@k8s-master cni]# cd /xdd/soft/cni/cfg
[root@k8s-master cfg]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The default image registry is unreachable, so rewrite the image reference to an internal registry:
[root@k8s-master cfg]# sed -ri 's#quay.io/coreos/flannel:.*#mharbor-cs.cloud.kemai.cn/base/flannel:v0.14.0#g' kube-flannel.yml
[root@k8s-master cfg]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Create the CNI environment variables:
cat /etc/profile.d/cni.sh
Verify the CNI network plugin:
[root@k8s-master cfg]# kubectl -n kube-system get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP              NODE         NOMINATED NODE   READINESS GATES
kube-flannel-ds-98hjc   1/1     Running   0          5m57s   172.30.103.73   k8s-master   <none>           <none>
kube-flannel-ds-kp6g9   1/1     Running   0          5m57s   172.30.103.92   k8s-node1    <none>           <none>
[root@k8s-master cfg]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    <none>   100m   v1.18.19
k8s-node1    Ready    <none>   51m    v1.18.19

4. Authorizing apiserver access to kubelet
cat > apiserver-to-kubelet-rbac.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: 'true'
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ''
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver-to-kubelet
  namespace: ''
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
[root@k8s-master cfg]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
[root@k8s-master cfg]# kubectl create clusterrolebinding kubernetes --clusterrole=cluster-admin --user=kubernetes

5. Adding Worker Nodes
Copy the files to the new node:
scp -r /xdd/soft/kubernetes root@172.30.103.92:/xdd/soft
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@172.30.103.92:/usr/lib/systemd/system
scp -r /xdd/soft/cni/ root@172.30.103.92:/xdd/soft
Delete the kubelet certificate and kubeconfig file (they are generated per node):
rm /xdd/soft/kubernetes/cfg/kubelet.kubeconfig
rm -f /xdd/soft/kubernetes/ssl/kubelet*
Change the hostname:
vi /xdd/soft/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node1
vi /xdd/soft/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node1
Start the services and enable them at boot:
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy
Approve the new node's kubelet certificate request on the master:
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
kubectl certificate approve node-csr-4zTjsaVSrhuyhIGqsefxzVoZDCNKei-aE2jyTP81Uro
Check the node status:
[root@k8s-master ~]# kubectl get nodes -o wide
NAME         STATUS   ROLES    AGE    VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    <none>   163m   v1.18.19   172.30.103.73   <none>        CentOS Linux 7 (Core)   5.4.120-1.el7.elrepo.x86_64   docker://20.10.4
k8s-node1    Ready    <none>   114m   v1.18.19   172.30.103.92   <none>        CentOS Linux 7 (Core)   5.4.120-1.el7.elrepo.x86_64   docker://20.10.4
k8s-node2    Ready    <none>   26m    v1.18.19   172.30.103.64   <none>        CentOS Linux 7 (Core)   5.4.120-1.el7.elrepo.x86_64   docker://20.10.4
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
kube-flannel-ds-98hjc   1/1     Running   3          72m   172.30.103.73   k8s-master   <none>           <none>
kube-flannel-ds-kp6g9   1/1     Running   2          72m   172.30.103.92   k8s-node1    <none>           <none>
kube-flannel-ds-rlz69   1/1     Running   2          26m   172.30.103.64   k8s-node2    <none>           <none>

6. Deploying the Dashboard
Official download: https://github.com/kubernetes/dashboard/releases
Install the Dashboard:
[root@k8s-master soft]# mkdir /xdd/soft/dashboard
[root@k8s-master soft]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
By default the Dashboard is only reachable from inside the cluster, so expose it through a NodePort Service:
vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
[root@k8s-master soft]# kubectl apply -f recommended.yaml
[root@k8s-master dashboard]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-78f5d9f487-rq89h   1/1     Running   0          3m18s
pod/kubernetes-dashboard-577bd97bc-wchwn         1/1     Running   0          3m19s
NAME                                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.0.0.41    <none>        8000/TCP        3m19s
service/kubernetes-dashboard        NodePort    10.0.0.99    <none>        443:30001/TCP   3m19s
Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:
[root@k8s-master dashboard]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master dashboard]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
Log in to the Dashboard using the token printed by the command below:
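The token is retrieved below with `kubectl describe secrets`; if only the raw token value is wanted, the relevant line can be filtered out with awk. The sketch runs the filter on a sample of the describe output (embedded, and with a truncated token, so it is reproducible without a cluster); on a live cluster the same awk filter would be piped from the real kubectl command:

```shell
# Sample `kubectl describe secrets` output; the token here is truncated
describe_output='Name:         dashboard-admin-token-ks58h
Namespace:    kube-system
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6...'

# Print only the token value
printf '%s\n' "$describe_output" | awk '$1 == "token:" {print $2}'
```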
[root@k8s-master dashboard]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImJiS1Z6MjhjTnFKanVMbDYyb1c4U3JEaWhGSWREMDF1emJjajBxSEZWWUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4ta3M1OGgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTA4NWNiYzgtOWQwMS00ZTMyLTljNzItMjBiYjdlN2IzZmZkIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.Wm9EwkhHoTeevGCwHze81e6XemgKbrSt3A5oiPnZvS8TeYGgX96lqRAp5nIXcAK-8yvN9fYHd52Zp1axO5iah-Ohd-ULA1VljcOZObf_uVrDqQ3vcj5Mtcms2X8E71KexFiNvDLuP-SftyDeo7RJLUyWJLVfMvK-VGXCJrVAWrwCmk5OOcpkweOGMcL37HeZ1zBROYiq8iCWiegmGzXExy_6MPxqKis4mZqtkla7PlrTXHWF6DDSyGkn2wpUDWk44N48WiODEo9yzK9xzYunVZ9jtauPUPxUgwCqfk9wTadD9L-iaTO3rIZtelqK_l95agKh0pteCz9hcdBTH8ZQVw

7. Deploying CoreDNS
Install CoreDNS. Download: https://github.com/coredns/deployment/tree/master/kubernetes
Copy the file coredns.yaml.sed from that repository to /xdd/soft/coredns/coredns.yaml locally.
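The coredns.yaml.sed template carries placeholders that must be substituted before it can be applied. What the CLUSTER_DNS_IP substitution in the deployment step below does can be shown on a single sample template line (the sample line here is illustrative):

```shell
# Substitute the clusterIP placeholder, as the sed step in the deployment does
line='  clusterIP: CLUSTER_DNS_IP'
echo "$line" | sed 's/CLUSTER_DNS_IP/10.0.0.2/g'
# -> "  clusterIP: 10.0.0.2"
```

The 10.0.0.2 address must match the clusterDNS value configured for kubelet, or Pods will be handed a DNS server that does not exist.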
[root@k8s-master coredns]# mkdir /xdd/soft/coredns
[root@k8s-master coredns]# cd /xdd/soft/coredns
[root@k8s-master coredns]# sed -i 's/CLUSTER_DNS_IP/10.0.0.2/g' coredns.yaml
[root@k8s-master coredns]# kubectl apply -f coredns.yaml
[root@k8s-master coredns]# kubectl get pods -n kube-system
NAME                      READY   STATUS             RESTARTS   AGE
coredns-6ff445f54-f8jl8   0/1     CrashLoopBackOff   4          2m41s
kube-flannel-ds-b2gsn     1/1     Running            0          112m
kube-flannel-ds-dg57q     1/1     Running            0          111m
kube-flannel-ds-r9pcd     1/1     Running            1          116m
Verify CoreDNS. Internal resolution check:
[root@k8s-master coredns]# kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
External resolution check:
/ # nslookup www.baidu.com
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local
Name:      www.baidu.com
Address 1: 14.215.177.38
Address 2: 14.215.177.39
Resolution works as expected.