k8s online deployment: installing a highly available amd64 k8s 1.23.17 cluster with kubeasz

2. Deploying a highly available k8s cluster

2.1 HA cluster planning

The HA cluster architecture is roughly as follows (the figure below is copied from the official kubeasz site; its node layout differs slightly from the cluster in this article, but the overall idea is the same):

(Figure: HA cluster architecture diagram from the kubeasz docs)

  • Note 1: make sure all nodes use the same time zone and have synchronized clocks. If your environment does not provide NTP time sync, installing chrony as part of the deployment is recommended.
  • Note 2: start from a clean system; do not reuse machines that previously ran kubeadm or another k8s distribution.
  • Note 3: upgrading the OS to a recent stable kernel is recommended; see the kernel upgrade documentation.
  • Note 4: when creating a multi-master cluster on a public cloud, also read the kubeasz documentation on deploying in public clouds.

The nodes required for an HA cluster are as follows:

Role Count Description
Deployment node 1 Runs the ansible/ezctl commands; usually co-located on the first master node
etcd nodes 3 etcd needs an odd number of members (1, 3, 5, ...); usually co-located on the master nodes
master nodes 3 An HA cluster needs at least 3 master nodes (the figure above, from the kubeasz site, shows only two masters and is therefore not exact)
worker nodes n Nodes that run application workloads; increase machine specs / node count as needed

Machine configuration:

  • master nodes: 4 CPU cores / 8 GB RAM / 50 GB disk
  • worker nodes: 8 CPU cores / 32 GB RAM / 200 GB disk or more recommended

Note: with the default configuration, the container runtime and kubelet consume disk space under /var. If your disk layout is different, set the container runtime and kubelet data directories in config.yml: CONTAINERD_STORAGE_DIR, DOCKER_STORAGE_DIR, KUBELET_ROOT_DIR.
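For example, to keep that data off /var you could point these keys at a dedicated mount (a minimal sketch; the /data/... paths are assumptions, adjust them to your own partitioning):

# clusters/<cluster>/config.yml — illustrative values only
CONTAINERD_STORAGE_DIR: "/data/containerd"
DOCKER_STORAGE_DIR: "/data/docker"
KUBELET_ROOT_DIR: "/data/kubelet"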

Since kubeasz 2.x, a multi-node HA cluster can be installed in either of two ways:

  • 1. Plan and prepare as described in this article, pre-configure the node information, and install the multi-node HA cluster directly.
  • 2. Deploy a single-node (AllinOne) cluster first, then scale it out into an HA cluster by adding nodes (see the sketch below).
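A rough sketch of option 2, assuming an AllinOne cluster named k8s03 already exists and the dk alias introduced later in this article is available (the IPs follow this article's node plan; adjust to yours):

dk ezctl add-master k8s03 10.13.15.62
dk ezctl add-master k8s03 10.13.15.63
dk ezctl add-node k8s03 10.13.15.64
dk ezctl add-node k8s03 10.13.15.65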

2.2 HA cluster deployment steps

The following example creates a 5-node multi-master HA cluster and finally adds one more node, k8s03-6. Unless stated otherwise, all commands in this document must be run as root.

Hostname IP root password Spec Disk OS Remarks
k8s03-1 10.13.15.61 cloud@2020 8c16g 400G Ubuntu20.04.3 LTS-amd64 control node, Harbor server
k8s03-2 10.13.15.62 cloud@2020 8c16g 400G Ubuntu20.04.3 LTS-amd64 control node
k8s03-3 10.13.15.63 cloud@2020 8c16g 400G Ubuntu20.04.3 LTS-amd64 control node
10.13.15.70 virtual IP in front of the control nodes, provided by keepalived + l4lb
k8s03-4 10.13.15.64 cloud@2020 8c16g 400G Ubuntu20.04.3 LTS-amd64 worker node
k8s03-5 10.13.15.65 cloud@2020 8c16g 400G Ubuntu20.04.3 LTS-amd64 worker node
k8s03-6 10.13.15.66 cloud@2020 8c16g 400G Ubuntu20.04.3 LTS-amd64 worker node (added later to demonstrate adding a node)

2.2.1 Base system configuration

  • Development environments: 8 CPU cores / 16 GB RAM / 50 GB+ disk per node. Production environments should be sized higher depending on workload (physical servers with 128 cores / 256 GB+ RAM are recommended) and have 1 TB+ of disk space.
  • Minimal installation of Ubuntu 16.04+ Server or CentOS 7+ Minimal.
  • Configure basic networking, package sources, SSH login, etc.
  • Running ansible in a docker container is recommended; no extra dependencies need to be installed.

2.2.2 Configure time synchronization on every node

2.2.2.1 Set the time zone

#Set the time zone to Asia/Shanghai (UTC+8) on all nodes
timedatectl set-timezone Asia/Shanghai
#Manually set the time to Beijing time (as close to the accurate time as possible)
timedatectl set-time "YYYY-MM-DD HH:MM:SS"

#Check the time zone and time
timedatectl

2.2.2.2 Install and configure the time sync service

#1. Install chrony as the time sync software:
apt-get install chrony -qy

#2. Edit /etc/chrony/chrony.conf and adjust the NTP server configuration (pick ONE of the following 3 options):
vi /etc/chrony/chrony.conf
# 2.1 Keep the default pool configuration
pool ntp.ubuntu.com iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
# 2.2 Add NTP servers located in mainland China, or other commonly used time servers
server ntp.aliyun.com iburst
server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
server ntp1.cloud.aliyuncs.com iburst
server ntp2.cloud.aliyuncs.com iburst
server ntp3.cloud.aliyuncs.com iburst
server ntp7.cloud.aliyuncs.com iburst
server ntp8.cloud.aliyuncs.com iburst
server ntp9.cloud.aliyuncs.com iburst
server ntp.api.bz iburst
# 2.3 To use a local NTP server, remove the entries above and add your own (assuming 10.0.0.1 is the local NTP server):
server 10.0.0.1 iburst
#Restart the chrony service:
systemctl restart chrony

#Verify chrony's sync status:
chronyc sourcestats -v
#The output looks roughly like this
210 Number of sources = 8
.- Number of sample points in measurement set.
/ .- Number of residual runs with same sign.
| / .- Length of measurement set (time).
| | / .- Est. clock freq error (ppm).
| | | / .- Est. error in freq.
| | | | / .- Est. offset.
| | | | | | On the -.
| | | | | | samples. \
| | | | | | |
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
==============================================================================
prod-ntp-3.ntp1.ps5.cano> 4 3 9 +704.136 21532.734 -17ms 4358us
prod-ntp-4.ntp1.ps5.cano> 4 3 10 +1534.234 26545.184 +395us 5834us
alphyn.canonical.com 4 3 11 -633.039 22169.467 -3371us 4248us
prod-ntp-5.ntp4.ps5.cano> 4 3 9 -218.393 5069.837 -28ms 1108us
tick.ntp.infomaniak.ch 4 3 9 +179.013 18164.828 +4135us 3693us
ntp5.flashdance.cx 0 0 0 +0.000 2000.000 +0ns 4000ms
ntp7.flashdance.cx 4 3 9 -2025.918 26238.625 -25ms 5856us
dns1.synet.edu.cn 4 3 8 +53.828 9119.362 +422us 1610us

2.2.3 Install dependency tools on every node

Running ansible in a docker container is recommended; no extra dependencies need to be installed.

2.2.4 Install k8s

Recommended k8s / kubeasz version compatibility matrix:

Kubernetes version 1.19 1.20 1.21 1.22 1.23 1.24 1.25 1.26 1.27 1.28 1.29 1.30 1.31
kubeasz version 2.2.2 3.0.1 3.1.0 3.1.1 3.2.0 3.6.2 3.6.2 3.6.2 3.6.2 3.6.2 3.6.3 3.6.4 3.6.5

2.2.4.1 Download the project source, binaries, and offline images

#Download the ezdown helper script; this example uses kubeasz 3.2.0
root@k8s03-1:~# mkdir /opt/kubeasz-deployk8s
root@k8s03-1:~# cd /opt/kubeasz-deployk8s
root@k8s03-1:/opt/kubeasz-deployk8s# export release=3.2.0
root@k8s03-1:/opt/kubeasz-deployk8s# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@k8s03-1:/opt/kubeasz-deployk8s# chmod +x ./ezdown
#Download the kubeasz code, binaries, and default container images (run ./ezdown to see all ezdown options)
# Mainland China environment (-k selects the k8s version; run "./ezdown" for the other options). The command below also creates and starts the local_registry container
./ezdown -D -m "CN" -k "v1.23.17"
# Outside mainland China
#./ezdown -D -m standard

#Download the Harbor-related images
root@k8s03-1:/opt/kubeasz-deployk8s# ./ezdown -R -m "CN"
root@k8s03-1:/opt/kubeasz-deployk8s# ls -alh /etc/kubeasz/down/harbor-offline-installer*
-rw-r--r-- 1 root root 534M Feb 6 2021 /etc/kubeasz/down/harbor-offline-installer-v2.1.3.tgz

#[Optional] Download extra container images (cilium, flannel, prometheus, etc.)
#./ezdown -X -m "CN" -k "v1.23.17"
#[Optional] Download offline system packages (for hosts that cannot use yum/apt repositories)
#./ezdown -P -m "CN" -k "v1.23.17"

After the script finishes successfully, all files (kubeasz code, binaries, offline images) are laid out under /etc/kubeasz.

2.2.4.2 Create a cluster configuration instance

# Run kubeasz itself in a container (creates and starts the kubeasz container)
./ezdown -S

# Create a new cluster named k8s03
root@k8s03-1:/opt/kubeasz-deployk8s# docker exec -it kubeasz ezctl new k8s03
2024-10-15 11:33:29 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s03
2024-10-15 11:33:30 DEBUG set versions
2024-10-15 11:33:30 DEBUG cluster k8s03: files successfully created.
2024-10-15 11:33:30 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s03/hosts'
2024-10-15 11:33:30 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s03/config.yml'

Then, following the prompts, edit '/etc/kubeasz/clusters/k8s03/hosts' and '/etc/kubeasz/clusters/k8s03/config.yml': adjust the hosts file and the main cluster-level options according to the node plan above; the remaining component options can be changed in config.yml.

#Hold Alt + left mouse button to select text by column (works in many editors)
root@k8s03-1:/opt/kubeasz-deployk8s# vi /etc/kubeasz/clusters/k8s03/hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
10.13.15.61 #change this to a control node IP
10.13.15.62 #change this to a control node IP
10.13.15.63 #change this to a control node IP

# master node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_master]
#10.13.15.61 k8s_nodename="k8s03-1" #change as needed (if k8s_nodename is not set, the node's IP is used as its k8s node name after installation)
10.13.15.61 #change this to a control node IP
10.13.15.62 #change this to a control node IP
10.13.15.63 #change this to a control node IP

# work node(s), set unique 'k8s_nodename' for each node
# CAUTION: 'k8s_nodename' must consist of lower case alphanumeric characters, '-' or '.',
# and must start and end with an alphanumeric character
[kube_node]
10.13.15.64 #change this to a worker node IP
10.13.15.65 #change this to a worker node IP

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
10.13.15.61 NEW_INSTALL=true #set this to the IP of the server that should run the Harbor service
#192.168.1.8 NEW_INSTALL=false

# [optional] loadbalance for accessing k8s from outside
[ex_lb]
#10.13.15.70 is the virtual IP of the keepalived + l4lb cluster formed by the 3 control nodes
10.13.15.63 LB_ROLE=backup EX_APISERVER_VIP=10.13.15.70 EX_APISERVER_PORT=8443
10.13.15.62 LB_ROLE=backup EX_APISERVER_VIP=10.13.15.70 EX_APISERVER_PORT=8443
10.13.15.61 LB_ROLE=master EX_APISERVER_VIP=10.13.15.70 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"

# Cluster container-runtime supported: docker, containerd
# if k8s version >= 1.24, docker is not supported
#CONTAINER_RUNTIME="containerd"
CONTAINER_RUNTIME="docker" #这个需要修改
...
root@k8s03-1:/opt/kubeasz-deployk8s# vi /etc/kubeasz/clusters/k8s03/config.yml 
# k8s version
#K8S_VER: "1.23.1"
K8S_VER: "1.23.17" #修改

############################
# role:etcd
############################
# Putting the wal directory on a different disk avoids I/O contention and improves performance
#ETCD_DATA_DIR: "/var/lib/etcd"
#ETCD_WAL_DIR: ""
ETCD_DATA_DIR: "/var/lib/etcd/data" #changed
ETCD_WAL_DIR: "/var/lib/etcd/wal" #changed

############################
# role:runtime [containerd,docker]
############################
# [containerd]..................
#SANDBOX_IMAGE: "easzlab/pause:3.6"
SANDBOX_IMAGE: "easzlab.io.local/easzlab/pause:3.6" #如果创建了docker registry就修改成此行配置,否则保持默认
# [docker].........HTTP......
#INSECURE_REG: '["127.0.0.1/8"]'
INSECURE_REG: '["easzlab.io.local"]' #如果创建了docker registry就修改成此行配置,否则保持默认

# nfs-provisioner ............
nfs_provisioner_install: "yes" #set to yes to install an NFS StorageClass automatically (requires a working NFS server first)
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.2"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "10.13.15.61" #changed (set to the NFS server IP)
nfs_path: "/nfs" #changed (set to the NFS server's exported directory)

############################
# role:harbor
############################
# harbor version..................
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.yourdomain.com"
HARBOR_TLS_PORT: 8444 #changed from 8443 to 8444

2.2.4.3 Start the installation

#The following is run on the k8s node OS, not inside the container (you can also exec into the kubeasz container and work there directly)
#Using an alias is recommended; ~/.bashrc should contain: alias dk='docker exec -it kubeasz'
source ~/.bashrc
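If the alias is not there yet, it can be added manually (a small convenience; assumes kubeasz was containerized with ./ezdown -S as above):

echo "alias dk='docker exec -it kubeasz'" >> ~/.bashrc
source ~/.bashrc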
2.2.4.3.0 Install and configure a docker registry
#Create the certs directory first, then generate a self-signed certificate for the registry
root@k8s03-1:/opt/kubeasz-deployk8s# mkdir -p certs
root@k8s03-1:/opt/kubeasz-deployk8s# openssl req  -addext "subjectAltName = DNS:easzlab.io.local" \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 36500 -out certs/domain.crt

root@k8s03-1:/opt/kubeasz-deployk8s# docker run -d \
--restart=always \
--name local_registry \
-v "$(pwd)"/certs:/certs \
-v /mnt/registry:/var/lib/registry \
-e REGISTRY_HTTP_ADDR=0.0.0.0:443 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
-p 443:443 \
registry:2

root@k8s03-1:/opt/kubeasz-deployk8s# mkdir -p /etc/docker/certs.d/easzlab.io.local
root@k8s03-1:/opt/kubeasz-deployk8s# cp certs/domain.crt /etc/docker/certs.d/easzlab.io.local/ca.crt

root@k8s03-1:/opt/kubeasz-deployk8s# docker tag registry:2 easzlab.io.local/registry:2
root@k8s03-1:/opt/kubeasz-deployk8s# docker push easzlab.io.local/registry:2
root@k8s03-1:/opt/kubeasz-deployk8s# docker pull easzlab.io.local/registry:2

docker tag registry:2 easzlab.io.local/library/registry:2
docker push easzlab.io.local/library/registry:2
docker pull easzlab.io.local/library/registry:2
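A quick sanity check of the registry (a sketch; assumes easzlab.io.local resolves to this host, e.g. via an /etc/hosts entry):

echo "10.13.15.61 easzlab.io.local" >> /etc/hosts
curl --cacert certs/domain.crt https://easzlab.io.local/v2/_catalog
#expected: a JSON list of repositories, e.g. {"repositories":["library/registry","registry"]}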
2.2.4.3.1 Configure passwordless SSH login
root@k8s03-1:~# cat /etc/hosts
...
10.13.15.61 k8s03-1
10.13.15.62 k8s03-2
10.13.15.63 k8s03-3
10.13.15.64 k8s03-4
10.13.15.65 k8s03-5

#$IP stands for every node address, including this host itself; answer yes and enter the root password when prompted
root@k8s03-1:~# ssh-keygen -P ""
root@k8s03-1:~# ssh-copy-id root@$IP

# Create a python symlink on every node
for i in {61..65}; do
ssh root@10.13.15.${i} 'if [ ! -L /usr/bin/python ];then ln -s /usr/bin/python3 /usr/bin/python; fi';
done
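A loop can also push the key to every planned node in one go (a sketch; password prompts appear once per node — add 10.13.15.66 if you plan to join k8s03-6 later):

for i in {61..65}; do ssh-copy-id root@10.13.15.${i}; done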
2.2.4.3.2 Set up an NFS server

This article uses the k8s03-1 node as the NFS server.

###NFS server side
#Install the NFS service with the following command;
#apt automatically pulls in nfs-common, rpcbind, and other packages
root@k8s03-1:~# apt install nfs-kernel-server -y

#Create the directory that will serve as the NFS export root
root@k8s03-1:~# mkdir /nfs

#Edit the configuration file
root@k8s03-1:~# vim /etc/exports
#[all hosts, full permissions] — add the following line to /etc/exports:
/nfs *(rw,sync,insecure,no_subtree_check,no_root_squash)

#Restart the NFS service
root@k8s03-1:~# systemctl restart nfs-kernel-server
root@k8s03-1:~# systemctl enable nfs-kernel-server && systemctl status nfs-kernel-server

#Common command-line tools
#The usual tools are installed along with the NFS server; nothing extra is needed
#Show client machines that have mounted this host's NFS exports
sudo showmount -e localhost
#Re-export every directory in the configuration file without restarting the service
sudo exportfs -rv
#Show NFS runtime statistics
sudo nfsstat
#Show rpc information, useful for checking that rpc is working
sudo rpcinfo
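A quick client-side check from any other node confirms the export works (a sketch; assumes installing nfs-common on that node is acceptable):

root@k8s03-4:~# apt install nfs-common -y
root@k8s03-4:~# showmount -e 10.13.15.61        #should list /nfs *
root@k8s03-4:~# mount -t nfs 10.13.15.61:/nfs /mnt && touch /mnt/test-file && umount /mnt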
2.2.4.3.3 One-shot vs. step-by-step deployment
#Run on the k8s node; you can also exec into the kubeasz container and run it there
#Using an alias is recommended; ~/.bashrc should contain: alias dk='docker exec -it kubeasz'
root@k8s03-1:~# source ~/.bashrc

# One-shot installation, equivalent to: docker exec -it kubeasz ezctl setup k8s03 all
#If you are not yet familiar with the process, running the steps one at a time and checking each result is recommended
root@k8s03-1:~# dk ezctl setup k8s03 all
# Or install step by step; run dk ezctl help setup for the step-by-step usage
# dk ezctl setup k8s03 01
# dk ezctl setup k8s03 02
# dk ezctl setup k8s03 03
# dk ezctl setup k8s03 04
...
2.2.4.3.4 01 - Create certificates and prepare for installation
#Inside the kubeasz container
k8s03-1:/# ezctl setup k8s03 01

#Or on the k8s node
root@k8s03-1:~# source ~/.bashrc
root@k8s03-1:~# dk ezctl setup k8s03 01
2.2.4.3.5 02 - Install the etcd cluster
k8s03-1:/# ezctl setup k8s03 02
2.2.4.3.6 03 - Install the container runtime
k8s03-1:/# ezctl setup k8s03 03
2.2.4.3.7 04 - Install the master nodes
k8s03-1:/# ezctl setup k8s03 04

#After the master installation step succeeds, the cluster's master nodes can be listed as follows
k8s03-1:/etc/kubeasz/clusters/k8s03# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.13.15.61 Ready,SchedulingDisabled master 57s v1.23.17
10.13.15.62 Ready,SchedulingDisabled master 57s v1.23.17
10.13.15.63 Ready,SchedulingDisabled master 57s v1.23.17
2.2.4.3.8 05 - Install the worker nodes
k8s03-1:/# ezctl setup k8s03 05

#After the worker installation step succeeds, all cluster nodes can be listed as follows
k8s03-1:/etc/kubeasz/clusters/k8s03# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.13.15.61 Ready,SchedulingDisabled master 4m59s v1.23.17
10.13.15.62 Ready,SchedulingDisabled master 4m59s v1.23.17
10.13.15.63 Ready,SchedulingDisabled master 4m59s v1.23.17
10.13.15.64 Ready node 22s v1.23.17
10.13.15.65 Ready node 22s v1.23.17
2.2.4.3.9 06 - Install the cluster network
k8s03-1:/# ezctl setup k8s03 06
2.2.4.3.10 07 - Install cluster add-ons
k8s03-1:/# ezctl setup k8s03 07


#Check the StorageClass that was created
root@k8s03-1:/# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 7m47s
#Make sc/managed-nfs-storage the default StorageClass
root@k8s03-1:/# kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
root@k8s03-1:/# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 9m13s
2.2.4.3.11 10-ex-lb
k8s03-1:/# ezctl setup k8s03 10
#The following two services should now be running on all 3 control nodes
root@k8s03-1:~# systemctl status l4lb.service
root@k8s03-1:~# systemctl status keepalived.service

#By default k8s03-1 is the keepalived master, so the VIP 10.13.15.70 should currently sit on k8s03-1
root@k8s03-1:~# ip a
...
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:65:87:be brd ff:ff:ff:ff:ff:ff
inet 10.13.15.61/24 brd 10.13.15.255 scope global dynamic ens3
valid_lft 72162sec preferred_lft 72162sec
inet 10.13.15.70/32 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe65:87be/64 scope link
valid_lft forever preferred_lft forever
...
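A quick way to confirm the VIP actually forwards to the apiservers is to hit it on the ex-lb port 8443 configured in the hosts file (a sketch; if anonymous access to /version is not allowed, a 401/403 response still proves the path works):

root@k8s03-1:~# curl -k https://10.13.15.70:8443/version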
2.2.4.3.11.1 Load-balancer-related settings
##The following is specific to OpenStack VMs and is executed on the OpenStack controller node
root@controller01:~# docker exec -it kolla-ansible bash
root@controller01:~# source /root/admin-openrc.sh
root@controller01:~# openstack port list | egrep "10.13.15.61|10.13.15.62|10.13.15.63"
| 10194efa-abee-4f04-9638-dd7e554d0be9 | 10.13.15.61 | fa:16:3e:65:87:be | ip_address='10.13.15.61', subnet_id='596d5b92-6db8-480c-a8b3-e672cc39b531' | ACTIVE |
| 23ef271f-40ee-489c-a219-3f8ca6fab8e1 | 10.13.15.63 | fa:16:3e:87:3a:47 | ip_address='10.13.15.63', subnet_id='596d5b92-6db8-480c-a8b3-e672cc39b531' | ACTIVE |
| ec016664-03d5-45a2-94b2-c2306c3c0136 | 10.13.15.62 | fa:16:3e:97:af:b4 | ip_address='10.13.15.62', subnet_id='596d5b92-6db8-480c-a8b3-e672cc39b531' | ACTIVE |

root@controller01:~# openstack port set --allowed-address ip-address=10.13.15.70 10194efa-abee-4f04-9638-dd7e554d0be9
root@controller01:~# openstack port set --allowed-address ip-address=10.13.15.70 23ef271f-40ee-489c-a219-3f8ca6fab8e1
root@controller01:~# openstack port set --allowed-address ip-address=10.13.15.70 ec016664-03d5-45a2-94b2-c2306c3c0136
#Back on the k8s node
root@k8s03-1:/var/data/harbor# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.68.38.156 <none> 8000/TCP 49m
kube-dns ClusterIP 10.68.0.2 <none> 53/UDP,53/TCP,9153/TCP 50m
kube-dns-upstream ClusterIP 10.68.209.167 <none> 53/UDP,53/TCP 50m
kubernetes-dashboard NodePort 10.68.173.217 <none> 443:32268/TCP 49m
metrics-server ClusterIP 10.68.168.180 <none> 443/TCP 50m
node-local-dns ClusterIP None <none> 9253/TCP 50m
#The NodePort service/kubernetes-dashboard is exposed on port 32268 on the k8s nodes
#So the dashboard can be reached via the VIP plus that port, i.e. https://10.13.15.70:32268 (Firefox is recommended)
#Any k8s node IP plus 32268 also works for reaching the native kubernetes-dashboard (Firefox is recommended)
(Screenshot: kubernetes-dashboard reached through https://10.13.15.70:32268)

2.2.4.4 View cluster information and status

#List the k8s nodes
root@k8s03-1:/var/data/harbor# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s03-1 Ready,SchedulingDisabled master 32m v1.23.17
k8s03-2 Ready,SchedulingDisabled master 32m v1.23.17
k8s03-3 Ready,SchedulingDisabled master 32m v1.23.17
k8s03-4 Ready node 16m v1.23.17
k8s03-5 Ready node 16m v1.23.17

#List the pods in the cluster
root@k8s03-1:/var/data/harbor# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-754966f84c-mmsdm 1/1 Running 0 15m
kube-system calico-node-5dgf7 1/1 Running 0 15m
kube-system calico-node-c2gn5 1/1 Running 0 15m
kube-system calico-node-j5xbk 1/1 Running 0 15m
kube-system calico-node-rfdbh 1/1 Running 0 15m
kube-system calico-node-s9r9k 1/1 Running 0 15m
kube-system coredns-596755dbff-d6ds5 1/1 Running 0 4m36s
kube-system dashboard-metrics-scraper-799d786dbf-rc4kn 1/1 Running 0 3m56s
kube-system kubernetes-dashboard-9f8c8b989-448zj 1/1 Running 0 3m56s
kube-system metrics-server-5d648558d9-npj5m 1/1 Running 0 4m25s
kube-system nfs-client-provisioner-f97c77ddd-ltrdv 1/1 Running 0 4m25s
kube-system node-local-dns-5zxbn 1/1 Running 0 4m34s
kube-system node-local-dns-jt7hh 1/1 Running 0 4m34s
kube-system node-local-dns-p52ck 1/1 Running 0 4m34s
kube-system node-local-dns-vk2sc 1/1 Running 0 4m34s
kube-system node-local-dns-vr87v 1/1 Running 0 4m34s

#Check the component status
root@k8s03-1:/var/data/harbor# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
etcd-2 Healthy {"health":"true","reason":""}
etcd-1 Healthy {"health":"true","reason":""}
#List the services in namespace kube-system
root@k8s03-1:/var/data/harbor# kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.68.38.156 <none> 8000/TCP 49m
kube-dns ClusterIP 10.68.0.2 <none> 53/UDP,53/TCP,9153/TCP 50m
kube-dns-upstream ClusterIP 10.68.209.167 <none> 53/UDP,53/TCP 50m
kubernetes-dashboard NodePort 10.68.173.217 <none> 443:32268/TCP 49m
metrics-server ClusterIP 10.68.168.180 <none> 443/TCP 50m
node-local-dns ClusterIP None <none> 9253/TCP 50m

2.2.4.5 Log in to kubernetes-dashboard

(Screenshot: kubernetes-dashboard login page)

If choosing the Kubeconfig option fails with "Internal error (500): Not enough data to create auth info structure.", handle it as follows:

#Create the following 3 files, then apply (or bash-execute) each of them
root@k8s03-1:/opt/kubeasz-deployk8s# cat 01-ServiceAccount-admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

root@k8s03-1:/opt/kubeasz-deployk8s# cat 02-ClusterRoleBinding-admin-user.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

root@k8s03-1:/opt/kubeasz-deployk8s# kubectl apply -f 01-ServiceAccount-admin-user.yaml
root@k8s03-1:/opt/kubeasz-deployk8s# kubectl apply -f 02-ClusterRoleBinding-admin-user.yaml
root@k8s03-1:/opt/kubeasz-deployk8s# cat 03-Getting-a-Bearer-Token-for-ServiceAccount.sh
#!/bin/bash
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

root@k8s03-1:/opt/kubeasz-deployk8s# bash 03-Getting-a-Bearer-Token-for-ServiceAccount.sh
#Running the script prints output like the following
eyJhbGciOiJSUzI1NiIsImtpZCI6IkJyV2xQLWZ5REZ6cUtOb1VBVzB0ZVZCUzBoVElWcWNyVVhtM1lhZnpGVjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWRnYmJsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2N2ViMWIzZi0wYjU0LTQ5M2ItOWQ4Mi00MWU0OGMxNzRmMDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.ieO-n3yTYnz7Z0PkGLvaoNq_0VxO9ENWh5gJXK6qzf3h7ifUC4kbSbr5M7ksF7A3Z6pGl9w7ZLnGuXF7Hguva7NmF_-3gXKQYRstDePrZNo0pBZFO13SZmCOKiLIMNyJFYv93WYJYwLeXhfRf61bi1BNxMapGC62oVyyjQ0UTpbqIzVbWElkhKPzwYl_shp_nyCXwDO0KVZsq8z0Tln-Xne7CwQGjDcoZBtqFfaRalTolS7iUwKBkXqp98luShk-Tecik11ICacCQLpYRMgi-3NE-WR4bnbAe2PKwYkYMv0TlfNL1MSA4-9zhQD9gHOik-eWfscPG_a6s7tknRaaWQ

#Copy the output above and paste it into /root/.kube/config as shown below
apiVersion: v1
clusters:
- cluster:
    server: https://XXXX
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: YOUR USER
  name: kubernetes
current-context: "kubernetes"
kind: Config
preferences: {}
users:
- name: YOUR USER
  user:
    client-certificate-data: CODED
    client-key-data: CODED
    token: CODED #paste the token here, replacing this CODED

The resulting /root/.kube/config file can then be used to log in to kubernetes-dashboard.
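As an alternative to editing the file by hand, kubectl can write the token into the existing user entry (a sketch; replace YOUR-USER with the user name already present in /root/.kube/config):

TOKEN=$(bash 03-Getting-a-Bearer-Token-for-ServiceAccount.sh)
kubectl config set-credentials YOUR-USER --token="${TOKEN}"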

2.2.5 Install Harbor

2.2.5.1 Install Harbor with the default configuration

#For a completely fresh Harbor reinstall, first delete /var/data/database /var/data/registry /var/data/redis /var/data/job_logs /var/data/harbor on the Harbor server
#Run the following on the k8s node (outside the kubeasz container) to clean them up
root@k8s03-1:~# rm -rf /var/data/database /var/data/registry /var/data/redis /var/data/job_logs /var/data/harbor

#Install Harbor (run inside the kubeasz container)
#If ex-lb was installed earlier, port 8443 on k8s03-1 is already taken. The Harbor installation also tries to use 8443, so this step would fail at the end, i.e. the "TASK [harbor : 安装 harbor]" task fails with "msg": "non-zero return code"
#Therefore, before installing Harbor with kubeasz, change HARBOR_TLS_PORT in clusters/k8s03/config.yml to another value, e.g. 8444
k8s03-1:/etc/kubeasz/clusters/k8s03# ezctl setup k8s03 11    # or: ezctl setup k8s03 harbor


#On the k8s node (outside the kubeasz container), run the following to check the Harbor containers
root@k8s03-1:~# cd /var/data/harbor
root@k8s03-1:/var/data/harbor# docker-compose ps
chartmuseum ./docker-entrypoint.sh Up (healthy)
harbor-core /harbor/entrypoint.sh Up (healthy)
harbor-db /docker-entrypoint.sh Up (healthy)
harbor-jobservice /harbor/entrypoint.sh Up (healthy)
harbor-log /bin/sh -c /usr/local/bin/ ... Up (healthy) 127.0.0.1:1514->10514/tcp
harbor-portal nginx -g daemon off; Up (healthy)
nginx nginx -g daemon off; Up (healthy) 0.0.0.0:80->8080/tcp,:::80->8080/tcp, 0.0.0.0:8444->8443/tcp,:::8444->8443/tcp
redis redis-server /etc/redis.conf Up (healthy)
registry /home/harbor/entrypoint.sh Up (healthy)
registryctl /home/harbor/start.sh Up (healthy)


#Harbor's installation directory defaults to /var/data; it is controlled by the HARBOR_PATH option in /etc/kubeasz/clusters/k8s03/config.yml
#The Harbor admin password can be found on the Harbor server in /var/data/harbor/harbor.yml
#Dgyj08NOYtvhMc7x

#Every node in the k8s cluster can log in to this Harbor registry. User: admin; password: the harbor_admin_password value in /var/data/harbor/harbor.yml on the Harbor server
root@k8s03-1:/etc/kubeasz# docker login harbor.easzlab.io.local:8444
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
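A quick push test against Harbor's default library project (a sketch; it reuses the registry:2 image already present locally and the domain/port used in the login above):

docker tag registry:2 harbor.easzlab.io.local:8444/library/registry:2
docker push harbor.easzlab.io.local:8444/library/registry:2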

2.2.5.2 Change the Harbor service port

root@k8s03-1:~# cd /var/data/harbor
#Stop the Harbor containers first
root@k8s03-1:/var/data/harbor# docker-compose down

#Replace every occurrence of 8444 with 8445
root@k8s03-1:/var/data/harbor# sed -i "s/8444/8445/g" `grep 8444 -rl ./`
#Start the Harbor containers again
root@k8s03-1:/var/data/harbor# docker-compose up -d

#Update the certificate directory
root@k8s03-1:/var/data/harbor# cd /etc/docker/certs.d/
root@k8s03-1:/etc/docker/certs.d# mkdir harbor.yourdomain.com\:8445
root@k8s03-1:/etc/docker/certs.d# docker cp kubeasz:/etc/kubeasz/down/ca.pem ./harbor.yourdomain.com:8445/
#Log in
root@k8s03-1:/etc/docker/certs.d# docker login harbor.yourdomain.com:8445
#Try pushing an image ...

2.2.6 Continue with installing KubeSphere

Kubernetes 1.23.17 has now been installed with kubeasz 3.2.0; the next step is to deploy KubeSphere 3.4.1.

Prerequisites

  • Before installation, the Kubernetes cluster must have a default StorageClass configured;
  • CSR signing is enabled in kube-apiserver when it is started with the --cluster-signing-cert-file and --cluster-signing-key-file flags. See the RKE installation notes (not relevant here);
  • For the preparation required to install KubeSphere on Kubernetes, see the official "preparation" guide.

2.2.6.1 Manually create sc/local

A k8s cluster deployed with kubeadm or kubeasz does not create any StorageClass objects, so you have to create them yourself. A default StorageClass is a prerequisite for deploying KubeSphere.

root@k8s03-1:~# cd /etc/kubeasz/clusters/k8s03/
root@k8s03-1:/etc/kubeasz/clusters/k8s03# cat >> default-storage-class.yaml <<-EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local
  annotations:
    cas.openebs.io/config: |
      - name: StorageType
        value: "hostpath"
      - name: BasePath
        value: "/var/openebs/local/"
    openebs.io/cas-type: local
    storageclass.beta.kubernetes.io/is-default-class: 'false'
    storageclass.kubesphere.io/supported-access-modes: '["ReadWriteOnce"]'
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
EOF

#Apply the YAML above to create the local StorageClass
root@k8s03-1:/etc/kubeasz/clusters/k8s03# kubectl get sc
No resources found
root@k8s03-1:/etc/kubeasz/clusters/k8s03# kubectl apply -f default-storage-class.yaml
storageclass.storage.k8s.io/local created
root@k8s03-1:/etc/kubeasz/clusters/k8s03# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local openebs.io/local Delete WaitForFirstConsumer false 1s

# Make it the default StorageClass
#kubectl patch storageclass local -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

#Creating a default StorageClass in the cluster
# There is none by default; you can create your own
#Create a StorageClass: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/
#Set a default StorageClass: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/

2.2.6.2 Manually create sc/nfs

#Set up the NFS server on the k8s03-1 node: see section "2.2.4.3.2 Set up an NFS server"
#1 Download and create the StorageClass
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# mkdir sc-nfs && cd sc-nfs
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/class.yaml

root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# mv class.yaml storageclass-nfs.yml
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# cat storageclass-nfs.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass # resource type
metadata:
  name: nfs-client # the name; PVCs reference the StorageClass by this name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
  archiveOnDelete: "false" # whether to archive data on delete; false = do not archive, true = archive

root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl apply -f storageclass-nfs.yml
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 2m16s
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 4s

#Make the nfs-client StorageClass the default
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 2m40s
nfs-client (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 28s

# RECLAIMPOLICY: the PV reclaim policy — whether the PV is deleted or retained after its pod/PVC is deleted.
# VOLUMEBINDINGMODE: with Immediate, the PVC is bound to a PV right away, without waiting for the pod to be scheduled or caring which node it lands on; with WaitForFirstConsumer, binding waits until the pod has been scheduled.
# ALLOWVOLUMEEXPANSION: whether PVC expansion is allowed

#2 Download and create the RBAC objects
#Automatic PV provisioning goes through kube-apiserver, so the provisioner needs RBAC permissions
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/rbac.yaml
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# mv rbac.yaml storageclass-nfs-rbac.yaml

root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl apply -f storageclass-nfs-rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
#3 Create the dynamic-provisioning deployment
#A dedicated deployment is needed to create PVs for PVCs automatically
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# cat deploy-nfs-client-provisioner.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 10.13.15.61 #NFS server IP
        - name: NFS_PATH
          value: /nfs #directory exported by the NFS server
      volumes:
      - name: nfs-client-root
        nfs:
          server: 10.13.15.61
          path: /nfs

root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl apply -f deploy-nfs-client-provisioner.yml
deployment.apps/nfs-client-provisioner created

#Check
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-client-provisioner-7cdb58c49d-4slt7 1/1 Running 0 44s 172.20.197.147 10.13.15.65 <none> <none>
#4 Test that dynamic provisioning works
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# cat nginx-sc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: huoban-harbor
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"
      resources:
        requests:
          storage: 1Gi

#Apply
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl apply -f nginx-sc.yaml
service/nginx created
statefulset.apps/web created

#Check that the PV and PVC were created
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0a9b8046-b364-4146-816d-815dd5696ac5 1Gi RWO Delete Bound default/www-web-0 nfs-client 64s
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
www-web-0 Bound pvc-0a9b8046-b364-4146-816d-815dd5696ac5 1Gi RWO nfs-client 60s

#Check the pods
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl get pods -o wide | egrep "NAME|web"
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-0 1/1 Running 0 5m11s 172.20.46.31 10.13.15.64 <none> <none>
web-1 1/1 Running 0 3m23s 172.20.197.148 10.13.15.65 <none> <none>
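To confirm the data really lands on the NFS export, write a file through a pod and look under /nfs on k8s03-1 (a sketch; the pvc-... directory names will differ in your environment):

kubectl exec web-0 -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
ls /nfs/
#expect per-PVC directories such as default-www-web-0-pvc-..., each holding the files written by the pod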

To delete the storageclass/nfs-client related objects and configuration, run:

root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl delete -f deploy-nfs-client-provisioner.yml
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl delete -f storageclass-nfs-rbac.yaml
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341/sc-nfs# kubectl delete -f storageclass-nfs.yml

2.2.6.3 Manually create sc/ceph

#Omitted for now; may be covered later on the author's GitHub blog: https://jiangsanyin.github.io/archives/

2.2.6.4 Download the KubeSphere installation files

root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/kubesphere-installer.yaml

root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# wget https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/cluster-configuration.yaml

2.2.6.5 Prepare the installation images

2.2.6.5.1 Prepare the image list file
#Create the image list file images-list-aliyuncs.txt
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# touch images-list-aliyuncs.txt

#The content of images-list-aliyuncs.txt is as follows
#Experience shows that if you only use the most basic KubeSphere features, this list can be trimmed further (not done here)
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# cat images-list-aliyuncs.txt
##kubesphere-images
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.1
registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.3.1
registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
##kubeedge-images
registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.13.0
registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.13.0
registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.3.0
##gatekeeper-images
registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
##openpitrix-images
registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
##kubesphere-devops-images
registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.4.0-2.319.3-1
registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
##kubesphere-monitoring-images
registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
##kubesphere-logging-images
registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-curator:v0.0.5
registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-dashboards:2.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.14.0
registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.9.4
registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:v1.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
##istio-images
registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.14.6
registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.14.6
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.50.1
registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.50
##example-images
registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
##weave-scope-images
registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0
2.2.6.5.2 Download offline-installation-tool.sh
#Download offline-installation-tool.sh
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.1/offline-installation-tool.sh
#If the download fails, the full content of the file is reproduced below
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# cat offline-installation-tool.sh
#!/usr/bin/env bash

# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


CurrentDIR=$(cd "$(dirname "$0")" || exit;pwd)
ImagesDirDefault=${CurrentDIR}/kubesphere-images
save="false"
registryurl=""
reposUrl=("quay.azk8s.cn" "gcr.azk8s.cn" "docker.elastic.co" "quay.io" "k8s.gcr.io")
KUBERNETES_VERSION=${KUBERNETES_VERSION:-"v1.21.5"}
HELM_VERSION=${HELM_VERSION:-"v3.6.3"}
CNI_VERSION=${CNI_VERSION:-"v0.9.1"}
ETCD_VERSION=${ETCD_VERSION:-"v3.4.13"}
CRICTL_VERSION=${CRICTL_VERSION:-"v1.22.0"}
DOCKER_VERSION=${DOCKER_VERSION:-"20.10.8"}

func() {
echo "Usage:"
echo
echo " $0 [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES_VERSION ]"
echo
echo "Description:"
echo " -b : save kubernetes' binaries."
echo " -d IMAGES-DIR : the dir of files (tar.gz) which generated by \`docker save\`. default: ${ImagesDirDefault}"
echo " -l IMAGES-LIST : text file with list of images."
echo " -r PRIVATE-REGISTRY : target private registry:port."
echo " -s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file."
echo " -v KUBERNETES_VERSION : download kubernetes' binaries. default: v1.21.5"
echo " -h : usage message"
echo
echo "Examples:"
echo
echo "# Download the default kubernetes version dependency binaries.(default: [kubernetes: v1.21.5], [helm: v3.6.3], [cni: v0.9.1], [etcd: v3.4.13])"
echo "./offline-installtion-tool.sh -b"
echo
echo "# Custom download the kubernetes version dependecy binaries."
echo "export KUBERNETES_VERSION=v1.22.1;export HELM_VERSION=v3.6.3;"
echo "./offline-installtion-tool.sh -b"
exit
}

while getopts 'bsl:r:d:v:h' OPT; do
case $OPT in
b) binary="true";;
d) ImagesDir="$OPTARG";;
l) ImagesList="$OPTARG";;
r) Registry="$OPTARG";;
s) save="true";;
v) KUBERNETES_VERSION="$OPTARG";;
h) func;;
?) func;;
*) func;;
esac
done

if [ -z "${ImagesDir}" ]; then
ImagesDir=${ImagesDirDefault}
fi

if [ -n "${Registry}" ]; then
registryurl=${Registry}
fi

if [ -z "${ARCH}" ]; then
case "$(uname -m)" in
x86_64)
ARCH=amd64
;;
armv8*)
ARCH=arm64
;;
aarch64*)
ARCH=arm64
;;
armv*)
ARCH=armv7
;;
*)
echo "${ARCH}, isn't supported"
exit 1
;;
esac
fi

binariesDIR=${CurrentDIR}/kubekey/${KUBERNETES_VERSION}/${ARCH}

if [[ ${binary} == "true" ]]; then
mkdir -p ${binariesDIR}
if [ -n "${KKZONE}" ] && [ "x${KKZONE}" == "xcn" ]; then
echo "Download kubeadm ..."
curl -L -o ${binariesDIR}/kubeadm https://kubernetes-release.pek3b.qingstor.com/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubeadm
echo "Download kubelet ..."
curl -L -o ${binariesDIR}/kubelet https://kubernetes-release.pek3b.qingstor.com/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubelet
echo "Download kubectl ..."
curl -L -o ${binariesDIR}/kubectl https://kubernetes-release.pek3b.qingstor.com/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubectl
echo "Download helm ..."
curl -L -o ${binariesDIR}/helm https://kubernetes-helm.pek3b.qingstor.com/linux-${ARCH}/${HELM_VERSION}/helm
echo "Download cni plugins ..."
curl -L -o ${binariesDIR}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz https://containernetworking.pek3b.qingstor.com/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz
echo "Download etcd ..."
curl -L -o ${binariesDIR}/etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz https://kubernetes-release.pek3b.qingstor.com/etcd/release/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz
echo "Download crictl ..."
curl -L -o ${binariesDIR}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz https://kubernetes-release.pek3b.qingstor.com/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz
echo "Download docker ..."
curl -L -o ${binariesDIR}/docker-${DOCKER_VERSION}.tgz https://mirrors.aliyun.com/docker-ce/linux/static/stable/${ARCH}/docker-${DOCKER_VERSION}.tgz
else
echo "Download kubeadm ..."
curl -L -o ${binariesDIR}/kubeadm https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubeadm
echo "Download kubelet ..."
curl -L -o ${binariesDIR}/kubelet https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubelet
echo "Download kubectl ..."
curl -L -o ${binariesDIR}/kubectl https://storage.googleapis.com/kubernetes-release/release/${KUBERNETES_VERSION}/bin/linux/${ARCH}/kubectl
echo "Download helm ..."
curl -L -o ${binariesDIR}/helm-${HELM_VERSION}-linux-${ARCH}.tar.gz https://get.helm.sh/helm-${HELM_VERSION}-linux-${ARCH}.tar.gz && cd ${binariesDIR} && tar -zxf helm-${HELM_VERSION}-linux-${ARCH}.tar.gz && mv linux-${ARCH}/helm . && rm -rf *linux-${ARCH}* && cd -
echo "Download cni plugins ..."
curl -L -o ${binariesDIR}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz
echo "Download etcd ..."
curl -L -o ${binariesDIR}/etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz https://github.com/coreos/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-${ARCH}.tar.gz
echo "Download crictl ..."
curl -L -o ${binariesDIR}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-${ARCH}.tar.gz
echo "Download docker ..."
curl -L -o ${binariesDIR}/docker-${DOCKER_VERSION}.tgz https://download.docker.com/linux/static/stable/${ARCH}/docker-${DOCKER_VERSION}.tgz
fi
fi

if [[ ${save} == "true" ]] && [[ -n "${ImagesList}" ]]; then
if [ ! -d ${ImagesDir} ]; then
mkdir -p ${ImagesDir}
fi
ImagesListLen=$(cat ${ImagesList} | wc -l)
name=""
images=""
index=0
for image in $(<${ImagesList}); do
if [[ ${image} =~ ^\#\#.* ]]; then
if [[ -n ${images} ]]; then
echo ""
echo "Save images: "${name}" to "${ImagesDir}"/"${name}".tar.gz <<<"
docker save ${images} | gzip -c > ${ImagesDir}"/"${name}.tar.gz
echo ""
fi
images=""
name=$(echo "${image}" | sed 's/#//g' | sed -e 's/[[:space:]]//g')
((index++))
continue
fi

image=$(echo "${image}" |tr -d '\r')
docker pull "${image}"
images=${images}" "${image}

if [[ ${index} -eq ${ImagesListLen}-1 ]]; then
if [[ -n ${images} ]]; then
docker save ${images} | gzip -c > ${ImagesDir}"/"${name}.tar.gz
fi
fi
((index++))
done
elif [ -n "${ImagesList}" ]; then
# shellcheck disable=SC2045
for image in $(ls ${ImagesDir}/*.tar.gz); do
echo "Load images: "${image}" <<<"
docker load < $image
done

if [[ -n ${registryurl} ]]; then
for image in $(<${ImagesList}); do
if [[ ${image} =~ ^\#\#.* ]]; then
continue
fi
url=${image%%/*} # everything left of the first / in ${image}
ImageName=${image#*/} # everything right of the first / in ${image}
echo $image

if echo "${reposUrl[@]}" | grep -Fx "$url" &>/dev/null; then
imageurl=$registryurl"/"${image#*/}
elif [ $url == $registryurl ]; then
if [[ $ImageName != */* ]]; then
imageurl=$registryurl"/library/"$ImageName
else
imageurl=$image
fi
elif [ "$(echo $url | grep ':')" != "" ]; then
imageurl=$registryurl"/library/"$image
else
imageurl=$registryurl"/"$image
fi

## push image
image=$(echo "${image}" |tr -d '\r')
imageurl=$(echo "${imageurl}" |tr -d '\r')
echo $imageurl
docker tag $image $imageurl
docker push $imageurl
done
fi
fi

Make the .sh file executable

chmod +x offline-installation-tool.sh

View the script's usage

./offline-installation-tool.sh -h
2.2.6.5.3 Pull the images
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# ./offline-installation-tool.sh -s -l images-list-aliyuncs.txt -d ./kubesphere-images

2.2.6.6 Push the images to the private registry

#Push to the docker registry
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# ./offline-installation-tool.sh -l images-list-aliyuncs.txt -d ./kubesphere-images -r easzlab.io.local

#If you installed Harbor as described earlier (a "library" project exists by default), the images can also be pushed to the Harbor registry
#root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# ./offline-installation-tool.sh -l images-list-aliyuncs.txt -d ./kubesphere-images -r harbor.yourdomain.com:8444

2.2.6.7 Modify the ks-installer:v3.4.1 image

#Without this change, the installer fails with "after connection broken by 'SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certifica"
#Create a Dockerfile with the following content
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# cat Dockerfile
FROM registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
USER root
RUN sed -i 's/self.verify_ssl = True/self.verify_ssl = False/g' /usr/local/lib/python3.10/site-packages/kubernetes/client/configuration.py
USER kubesphere


root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# docker build -f Dockerfile ./ -t easzlab.io.local/registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# docker push easzlab.io.local/registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# docker tag easzlab.io.local/registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1 easzlab.io.local/library/ks-installer:v3.4.1
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# docker push easzlab.io.local/library/ks-installer:v3.4.1

#Point kubesphere-installer.yaml at the modified ks-installer image
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# sed -i 's#image: kubesphere/ks-installer:v3.4.1#image: easzlab.io.local/library/ks-installer:v3.4.1#g' kubesphere-installer.yaml

2.2.6.8 Push the images into the target registry projects

#Create the push-image-to-easzlab.io.local.sh file
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# touch push-image-to-easzlab.io.local.sh
#The content of push-image-to-easzlab.io.local.sh is as follows
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# cat push-image-to-easzlab.io.local.sh
#!/bin/bash
images=`docker images | egrep "^registry.cn-beijing.aliyuncs.com"`

#Push every image into the library project
# Iterate over each line with a while loop
while IFS= read -r line; do
echo "$line"
image=`echo $line | awk '{print $1}'`
tag=`echo $line | awk '{print $2}'`
ImageName=${image##*/} #image name without any leading repository path or slashes
newImageTag="easzlab.io.local/library/${ImageName}:${tag}"
echo ${newImageTag}
docker tag "${image}:${tag}" ${newImageTag}
docker push ${newImageTag}
echo "---------------"
done <<< "$images"

#Re-tag the images that are known (verified) to be needed during the KubeSphere 3.4.1 deployment under their expected repository paths
docker tag easzlab.io.local/library/snapshot-controller:v4.0.0 easzlab.io.local/csiplugin/snapshot-controller:v4.0.0
docker push easzlab.io.local/csiplugin/snapshot-controller:v4.0.0
docker tag easzlab.io.local/library/defaultbackend-amd64:1.4 easzlab.io.local/mirrorgooglecontainers/defaultbackend-amd64:1.4
docker push easzlab.io.local/mirrorgooglecontainers/defaultbackend-amd64:1.4
docker tag easzlab.io.local/library/ks-console:v3.4.1 easzlab.io.local/kubesphere/ks-console:v3.4.1
docker push easzlab.io.local/kubesphere/ks-console:v3.4.1
docker tag easzlab.io.local/library/kube-state-metrics:v2.6.0 easzlab.io.local/kubesphere/kube-state-metrics:v2.6.0
docker push easzlab.io.local/kubesphere/kube-state-metrics:v2.6.0
docker tag easzlab.io.local/library/kube-rbac-proxy:v0.11.0 easzlab.io.local/kubesphere/kube-rbac-proxy:v0.11.0
docker push easzlab.io.local/kubesphere/kube-rbac-proxy:v0.11.0
docker tag easzlab.io.local/library/node-exporter:v1.3.1 easzlab.io.local/prom/node-exporter:v1.3.1
docker push easzlab.io.local/prom/node-exporter:v1.3.1
docker tag easzlab.io.local/library/ks-apiserver:v3.4.1 easzlab.io.local/kubesphere/ks-apiserver:v3.4.1
docker push easzlab.io.local/kubesphere/ks-apiserver:v3.4.1
docker tag easzlab.io.local/library/ks-controller-manager:v3.4.1 easzlab.io.local/kubesphere/ks-controller-manager:v3.4.1
docker push easzlab.io.local/kubesphere/ks-controller-manager:v3.4.1
docker tag easzlab.io.local/library/prometheus-operator:v0.55.1 easzlab.io.local/kubesphere/prometheus-operator:v0.55.1
docker push easzlab.io.local/kubesphere/prometheus-operator:v0.55.1
docker tag easzlab.io.local/library/kubectl:v1.22.0 easzlab.io.local/kubesphere/kubectl:v1.22.0
docker push easzlab.io.local/kubesphere/kubectl:v1.22.0
docker tag easzlab.io.local/library/alertmanager:v0.23.0 easzlab.io.local/prom/alertmanager:v0.23.0
docker push easzlab.io.local/prom/alertmanager:v0.23.0
docker tag easzlab.io.local/library/prometheus-config-reloader:v0.55.1 easzlab.io.local/kubesphere/prometheus-config-reloader:v0.55.1
docker push easzlab.io.local/kubesphere/prometheus-config-reloader:v0.55.1
docker tag easzlab.io.local/library/prometheus:v2.39.1 easzlab.io.local/prom/prometheus:v2.39.1
docker push easzlab.io.local/prom/prometheus:v2.39.1
docker tag easzlab.io.local/library/notification-manager-operator:v2.3.0 easzlab.io.local/kubesphere/notification-manager-operator:v2.3.0
docker push easzlab.io.local/kubesphere/notification-manager-operator:v2.3.0
docker tag easzlab.io.local/library/notification-manager:v2.3.0 easzlab.io.local/kubesphere/notification-manager:v2.3.0
docker push easzlab.io.local/kubesphere/notification-manager:v2.3.0
docker tag easzlab.io.local/library/notification-tenant-sidecar:v3.2.0 easzlab.io.local/kubesphere/notification-tenant-sidecar:v3.2.0
docker push easzlab.io.local/kubesphere/notification-tenant-sidecar:v3.2.0

#Run push-image-to-easzlab.io.local.sh
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# bash push-image-to-easzlab.io.local.sh

2.2.6.9 Install KubeSphere

Run the following commands to deploy KubeSphere:

#Apply the following two files to install KubeSphere
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# kubectl apply -f kubesphere-installer.yaml
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# kubectl apply -f cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer created
#Watch the installation progress and logs
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# kubectl -n kubesphere-system logs ks-installer-64c97c9dc5-nbrlt -f

#Label the cluster's control-plane nodes
kubectl label node 10.13.15.61 node-role.kubernetes.io/control-plane=""
kubectl label node 10.13.15.61 node-role.kubernetes.io/master=""
kubectl label node 10.13.15.61 node.kubernetes.io/exclude-from-external-load-balancers=""
kubectl label node 10.13.15.62 node-role.kubernetes.io/control-plane=""
kubectl label node 10.13.15.62 node-role.kubernetes.io/master=""
kubectl label node 10.13.15.62 node.kubernetes.io/exclude-from-external-load-balancers=""
kubectl label node 10.13.15.63 node-role.kubernetes.io/control-plane=""
kubectl label node 10.13.15.63 node-role.kubernetes.io/master=""
kubectl label node 10.13.15.63 node.kubernetes.io/exclude-from-external-load-balancers=""

#List the cluster's nodes
root@k8s03-1:/opt/kubeasz-deployk8s/deploy-kubesphere341# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.13.15.61 Ready,SchedulingDisabled control-plane,master 4d17h v1.23.17
10.13.15.62 Ready,SchedulingDisabled control-plane,master 4d17h v1.23.17
10.13.15.63 Ready,SchedulingDisabled control-plane,master 4d17h v1.23.17
10.13.15.64 Ready node 4d17h v1.23.17
10.13.15.65 Ready node 4d17h v1.23.17
10.13.15.66 Ready node 4d17h v1.23.17

The KubeSphere web console is now reachable via the VIP on port 30880:

(Screenshot: KubeSphere web console reached via the VIP on port 30880)

2.2.6.10 Uninstall KubeSphere from k8s

#Reference: https://kubesphere.io/zh/docs/v3.4/installing-on-kubernetes/uninstall-kubesphere-from-k8s/

wget https://raw.githubusercontent.com/kubesphere/ks-installer/release-3.1/scripts/kubesphere-delete.sh
bash kubesphere-delete.sh

2.3 Delete a node

Reference: https://github.com/easzlab/kubeasz/blob/3.2.0/docs/op/op-node.md

Workflow for deleting a worker node (see the del-node function in ezctl and playbooks/32.delnode.yml):

  • Check whether the node can be deleted
  • Migrate the pods running on the node
  • Remove the node-related services and files
  • Remove the node from the cluster
#Before deleting the node
k8s03-1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.13.15.61 Ready,SchedulingDisabled master 3h23m v1.23.17
10.13.15.62 Ready,SchedulingDisabled master 3h23m v1.23.17
10.13.15.63 Ready,SchedulingDisabled master 3h23m v1.23.17
10.13.15.64 Ready node 3h7m v1.23.17
10.13.15.65 Ready node 3h7m v1.23.17

#Delete the node (only one node can be deleted at a time); its IP is 10.13.15.65
k8s03-1:~# ezctl del-node k8s03 10.13.15.65 # assuming the node to delete is 10.13.15.65

#After the node has been deleted
k8s03-1:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.13.15.61 Ready,SchedulingDisabled master 3h25m v1.23.17
10.13.15.62 Ready,SchedulingDisabled master 3h25m v1.23.17
10.13.15.63 Ready,SchedulingDisabled master 3h25m v1.23.17
10.13.15.64 Ready node 3h9m v1.23.17

2.4 Add a node

The rough workflow for adding a kube_node node (see the add-node function in ezctl and playbooks/22.addnode.yml):

  • [optional] Install chrony time sync on the new node
  • Run the prepare step on the new node
  • Install the docker service on the new node
  • Install the kube_node services on the new node
  • Install the network plugin components on the new node

Steps

First configure passwordless SSH login to the new node, then run the following (assuming the cluster is named k8s03 and the node to add is 10.13.15.66, whose IP is also its node name):

#Only one node can be added at a time (remember to configure passwordless SSH before adding a node)
$ ezctl add-node k8s03 10.13.15.66

#To add multiple nodes, simply run the add-node command once per node
#You may also need to run this on the new node: echo "10.13.15.61 easzlab.io.local" >> /etc/hosts

Verification

# Check the new node's status
$ kubectl get node

# Check the status of the network plugin (calico or flannel) pods on the new node
$ kubectl get pod -n kube-system -o wide

# Verify that a new pod can be scheduled onto the new node — see the sketch below
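One way to check this is a throwaway pod pinned to the new node via a nodeSelector (a sketch; 10.13.15.66 is used as the node name because no k8s_nodename was configured):

kubectl run test-66 --image=nginx:latest --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"kubernetes.io/hostname":"10.13.15.66"}}}'
kubectl get pod test-66 -o wide    #the NODE column should show 10.13.15.66
kubectl delete pod test-66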

2.5 Tear down the cluster

Clean up on the host machines as follows:

  • Destroy the cluster: docker exec -it kubeasz ezctl destroy <cluster name>
  • Reboot the nodes to make sure leftover virtual NICs, routes, etc. are cleaned up

Steps:

k8s03-1:/# ezctl destroy k8s03
#Then reboot all k8s nodes

2.6 Errors and fixes

2.6.1 "TASK [kube-node : 轮询等待kubelet启动]" (waiting for kubelet to start) fails when adding a node

(Screenshot: the failing "TASK [kube-node : 轮询等待kubelet启动]" output)

Diagnosis:

The t1-gpu node's IP is 10.12.62.25; its kubelet service logs show the following error:

(Screenshot: kubelet error about mismatched cgroup drivers)

So the kubelet installed by kubeasz uses the "systemd" cgroup driver, while the existing docker on t1-gpu uses cgroupfs; changing docker's cgroup driver on t1-gpu to systemd fixes it.

Fix:

#Add the following to /etc/docker/daemon.json on the t1-gpu server
root@t1-gpu:~# vi /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
...
}

#Restart the docker service on the t1-gpu server
root@t1-gpu:~# systemctl daemon-reload && systemctl restart docker

root@t1-gpu:~# docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

#Then retry adding the k8s node from the deployment node

2.6.2 After a failed attempt to add a node, re-adding it complains that the node is already in the hosts file

(Screenshot: error saying the node is already present in the hosts file)

Fix:

#On the deployment node, edit /etc/kubeasz/clusters/<cluster name>/hosts
root@controller01:/etc/kubeasz# vi /etc/kubeasz/clusters/cluster01/hosts
...
# work node(s)
[kube_node]
10.12.62.25 #delete this line
#192.168.1.3
#192.168.1.4
172.20.0.21
...

Then retry adding the k8s node from the deployment node.

