What is KK?
kk is the short name for KubeKey and also the name of its command-line binary (see KubeKey on kubesphere.com.cn).
KubeKey, written in Go, is a new installation tool that replaces the earlier Ansible-based installer. KubeKey gives you flexible installation options: you can install Kubernetes only, or install Kubernetes and KubeSphere together.
KubeSphere is an open-source project on GitHub and home to thousands of community users, many of whom run their workloads on it. For installation on Linux, KubeSphere can be deployed either in the cloud or on premises, for example on AWS EC2, Azure VMs, or bare metal.
Typical KubeKey use cases (example commands are sketched right after this list):
- Install Kubernetes only;
- Install Kubernetes and KubeSphere together with a single command;
- Scale a cluster up or down;
- Upgrade a cluster;
- Install Kubernetes-related add-ons (Chart or YAML).
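For orientation, here is a rough sketch of the commands behind these scenarios, based on the KubeKey CLI as used later in this post; exact flags can differ between releases, and config-sample.yaml stands in for your own cluster definition file:
./kk create cluster --with-kubernetes v1.24.1                            # Kubernetes only
./kk create cluster --with-kubernetes v1.24.1 --with-kubesphere v3.3.0   # Kubernetes plus KubeSphere
./kk add nodes -f config-sample.yaml                                     # scale out after adding hosts to the config file
./kk upgrade --with-kubernetes v1.24.1 -f config-sample.yaml             # upgrade an existing cluster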
In this post, we use KubeKey to install Kubernetes only.
Download KubeKey
Run the following command first so that KubeKey is downloaded from the correct region:
export KKZONE=cn
Then run the following command to download KubeKey:
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
This fetches the latest KubeKey release at the time of writing (v2.2.2); change the version number in the command to download a specific version.
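For example, a sketch of pinning a different release (the version string here is purely illustrative) and making the downloaded binary executable in case it is not already:
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.1 sh -
chmod +x kk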
Supported Kubernetes versions
Run ./kk version --show-supported-k8s to list every Kubernetes version that KubeKey can install:
root@cp4:~# ./kk version --show-supported-k8s
...
v1.21.2
v1.21.3
v1.21.4
v1.21.5
v1.21.6
v1.21.7
v1.21.8
v1.21.9
v1.21.10
v1.21.11
v1.21.12
v1.21.13
v1.22.0
v1.22.1
v1.22.2
v1.22.3
v1.22.4
v1.22.5
v1.22.6
v1.22.7
v1.22.8
v1.22.9
v1.22.10
v1.23.0
v1.23.1
v1.23.2
v1.23.3
v1.23.4
v1.23.5
v1.23.6
v1.23.7
v1.23.8
v1.24.0
v1.24.1
As the output shows, the latest Kubernetes release, 1.25, is not supported yet.
Note that if you want KubeKey to install Kubernetes together with KubeSphere 3.3.0, only the Kubernetes versions in the table below are supported.
| KubeSphere version | Supported Kubernetes versions |
| --- | --- |
| v3.3.0 | v1.19.x, v1.20.x, v1.21.x, v1.22.x, v1.23.x (experimental support) |
Prepare the Linux hosts
System requirements
| System | Minimum requirements (per node) |
| --- | --- |
| Ubuntu 16.04, 18.04, 20.04, 22.04 | CPU: 2 cores, memory: 4 GB, disk: 40 GB |
| Debian Buster, Stretch | CPU: 2 cores, memory: 4 GB, disk: 40 GB |
| CentOS 7.x | CPU: 2 cores, memory: 4 GB, disk: 40 GB |
| Red Hat Enterprise Linux 7 | CPU: 2 cores, memory: 4 GB, disk: 40 GB |
| SUSE Linux Enterprise Server 15 / openSUSE Leap 15.2 | CPU: 2 cores, memory: 4 GB, disk: 40 GB |
Notes
- The /var/lib/docker path is mainly used to store container data, and its usage grows steadily during operation. In a production environment, it is therefore recommended to mount a separate disk for /var/lib/docker.
- The CPU must be x86_64; Arm CPUs are not supported yet.
Node requirements
- All nodes must be reachable via SSH.
- Time must be synchronized across all nodes.
- sudo, curl, openssl, and tar must be available on all nodes.
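On Ubuntu nodes, for example, a minimal preparation sketch (apt-based hosts and chrony for time synchronization are assumed):
apt-get update
apt-get install -y curl openssl tar chrony   # basic tools plus an NTP client
systemctl enable --now chrony                # keep node clocks in sync
timedatectl status                           # should report "System clock synchronized: yes"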
Container runtime
By default, KubeKey installs the latest version of Docker. Alternatively, you can install Docker or another container runtime manually before creating the cluster.
| Supported container runtime | Version |
| --- | --- |
| Docker | 19.3.8+ |
| containerd | Latest |
| CRI-O (experimental, not fully tested) | Latest |
| iSula (experimental, not fully tested) | Latest |
Dependency requirements
KubeKey can install Kubernetes and KubeSphere together. The dependencies that must be present differ depending on the Kubernetes version to be installed. Refer to the table below to check whether you need to install them on your nodes in advance.
| Dependency | Kubernetes ≥ 1.18 | Kubernetes < 1.18 |
| --- | --- | --- |
| socat | Required | Optional, but recommended |
| conntrack | Required | Optional, but recommended |
| ebtables | Optional, but recommended | Optional, but recommended |
| ipset | Optional, but recommended | Optional, but recommended |
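Since this post installs Kubernetes 1.24, socat and conntrack are required. On Ubuntu, a one-line sketch for installing all four packages up front (package names are assumed to match the distribution defaults):
apt-get install -y socat conntrack ebtables ipset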
Network and DNS requirements
- Make sure the DNS addresses in /etc/resolv.conf are reachable; otherwise DNS resolution inside the cluster may break.
- If your network uses firewall rules or security groups, make sure the infrastructure components can reach each other on the required ports. It is recommended to disable the firewall; see the port requirements for details.
- Supported CNI plugins: Calico and Flannel.
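A quick per-node sanity check might look like the following (ufw is assumed as the firewall frontend on Ubuntu; open the required ports instead if you cannot disable it):
cat /etc/resolv.conf    # confirm the nameserver entries are valid and reachable
ufw status              # check whether a firewall is active
ufw disable             # only if your environment allows it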
Create and edit the cluster configuration file
./kk create config -f config-sample.yaml
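The Kubernetes version can also be pinned when generating the file; a sketch (the --with-kubernetes flag is optional, and the generated config-sample.yaml is edited by hand afterwards either way):
./kk create config --with-kubernetes v1.24.1 -f config-sample.yaml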
root@cp4:/home/zyi# ./kk create config -f config-sample.yaml
root@cp4:/home/zyi# vim config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: test-cluster
spec:
  hosts:
  - {name: cp4, address: 172.16.10.10, internalAddress: 172.16.10.10, password: "cisco123"}
  - {name: worker41, address: 172.16.10.11, internalAddress: 172.16.10.11, password: "cisco123"}
  roleGroups:
    etcd:
    - cp4
    control-plane:
    - cp4
    worker:
    - cp4
    - worker41
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.24.1
    clusterName: cluster24.smartx.lab
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.244.64.0/18
    kubeServiceCIDR: 10.244.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
The configuration above uses:
- Two nodes: cp4 and worker41;
- Container runtime: containerd;
- Kubernetes v1.24.1;
- CNI: Calico.
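If you prefer SSH keys over plain-text passwords, a host entry can reference a private key instead; a sketch following the config-sample conventions (the user and key path are only examples):
  - {name: cp4, address: 172.16.10.10, internalAddress: 172.16.10.10, user: root, privateKeyPath: "~/.ssh/id_rsa"}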
Create the cluster
root@cp4:/home/zyi# ./kk create config -f config-sample.yaml
config-sample.yaml already exists. Are you sure you want to overwrite this config file? [yes/no]: no
root@cp4:/home/zyi# ./kk create cluster -f config-sample.yaml
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
12:04:12 UTC [GreetingsModule] Greetings
12:04:13 UTC message: [worker41]
Greetings, KubeKey!
12:04:13 UTC message: [cp4]
Greetings, KubeKey!
12:04:13 UTC success: [worker41]
12:04:13 UTC success: [cp4]
12:04:13 UTC [NodePreCheckModule] A pre-check on nodes
12:04:13 UTC success: [cp4]
12:04:13 UTC success: [worker41]
12:04:13 UTC [ConfirmModule] Display confirmation form
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| cp4 | y | y | y | y | y | y | | y | | | | | | | UTC 12:04:13 |
| worker41 | y | y | y | y | y | y | | y | | | | | | | UTC 12:04:13 |
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
12:04:15 UTC success: [LocalHost]
12:04:15 UTC [NodeBinariesModule] Download installation binaries
12:04:15 UTC message: [localhost]
downloading amd64 kubeadm v1.24.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 42.3M 100 42.3M 0 0 16.1M 0 0:00:02 0:00:02 --:--:-- 16.0M
12:04:18 UTC message: [localhost]
downloading amd64 kubelet v1.24.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 110M 100 110M 0 0 27.3M 0 0:00:04 0:00:04 --:--:-- 27.3M
12:04:23 UTC message: [localhost]
downloading amd64 kubectl v1.24.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.5M 100 43.5M 0 0 17.0M 0 0:00:02 0:00:02 --:--:-- 17.0M
12:04:26 UTC message: [localhost]
downloading amd64 helm v3.6.3 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
3 13.0M 3 496k 0 0 752 0 5:03:40 0:11:14 4:52:26 0
curl: (56) OpenSSL SSL_read: Connection timed out, errno 110
12:15:41 UTC [WARN] Having a problem with accessing https://storage.googleapis.com? You can try again after setting environment 'export KKZONE=cn'
12:15:41 UTC message: [LocalHost]
Failed to download helm binary: curl -L -o /home/zyi/kubekey/helm/v3.6.3/amd64/helm-v3.6.3-linux-amd64.tar.gz https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz && cd /home/zyi/kubekey/helm/v3.6.3/amd64 && tar -zxf helm-v3.6.3-linux-amd64.tar.gz && mv linux-amd64/helm . && rm -rf *linux-amd64* error: exit status 56
12:15:41 UTC failed: [LocalHost]
**error: Pipeline[CreateClusterPipeline] execute failed: Module[NodeBinariesModule] exec failed:
failed: [LocalHost] [DownloadBinaries] exec failed after 1 retires: Failed to download helm binary: curl -L -o /home/zyi/kubekey/helm/v3.6.3/amd64/helm-v3.6.3-linux-amd64.tar.gz https://get.helm.sh/helm-v3.6.3-linux-amd64.tar.gz && cd /home/zyi/kubekey/helm/v3.6.3/amd64 && tar -zxf helm-v3.6.3-linux-amd64.tar.gz && mv linux-amd64/helm . && rm -rf *linux-amd64* error: exit status 56**
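The helm download from storage.googleapis.com timed out. As the warning above suggests, exporting KKZONE=cn makes KubeKey fetch the binaries from mirrors in China; after setting it, the same create command is run again: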
root@cp4:/home/zyi# **export KKZONE=cn**
root@cp4:/home/zyi# ./kk create cluster -f config-sample.yaml
_ __ _ _ __
| | / / | | | | / /
| |/ / _ _| |__ ___| |/ / ___ _ _
| \| | | | '_ \ / _ \ \ / _ \ | | |
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
__/ |
|___/
12:16:04 UTC [GreetingsModule] Greetings
12:16:05 UTC message: [worker41]
Greetings, KubeKey!
12:16:05 UTC message: [cp4]
Greetings, KubeKey!
12:16:05 UTC success: [worker41]
12:16:05 UTC success: [cp4]
12:16:05 UTC [NodePreCheckModule] A pre-check on nodes
12:16:05 UTC success: [worker41]
12:16:05 UTC success: [cp4]
12:16:05 UTC [ConfirmModule] Display confirmation form
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| cp4 | y | y | y | y | y | y | | y | | | | | | | UTC 12:16:05 |
| worker41 | y | y | y | y | y | y | | y | | | | | | | UTC 12:16:05 |
+----------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations
Continue this installation? [yes/no]: yes
12:16:08 UTC success: [LocalHost]
12:16:08 UTC [NodeBinariesModule] Download installation binaries
12:16:08 UTC message: [localhost]
downloading amd64 kubeadm v1.24.1 ...
12:16:08 UTC message: [localhost]
kubeadm is existed
12:16:08 UTC message: [localhost]
downloading amd64 kubelet v1.24.1 ...
12:16:09 UTC message: [localhost]
kubelet is existed
12:16:09 UTC message: [localhost]
downloading amd64 kubectl v1.24.1 ...
12:16:09 UTC message: [localhost]
kubectl is existed
12:16:09 UTC message: [localhost]
downloading amd64 helm v3.6.3 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 43.0M 100 43.0M 0 0 1018k 0 0:00:43 0:00:43 --:--:-- 1024k
12:16:52 UTC message: [localhost]
downloading amd64 kubecni v0.9.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 37.9M 100 37.9M 0 0 1021k 0 0:00:38 0:00:38 --:--:-- 1063k
12:17:31 UTC message: [localhost]
downloading amd64 crictl v1.24.0 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 13.8M 100 13.8M 0 0 1019k 0 0:00:13 0:00:13 --:--:-- 1059k
12:17:45 UTC message: [localhost]
downloading amd64 etcd v3.4.13 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 16.5M 100 16.5M 0 0 1019k 0 0:00:16 0:00:16 --:--:-- 1051k
12:18:01 UTC message: [localhost]
downloading amd64 containerd 1.6.4 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 42.3M 100 42.3M 0 0 1019k 0 0:00:42 0:00:42 --:--:-- 1087k
12:18:44 UTC message: [localhost]
downloading amd64 runc v1.1.1 ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9194k 100 9194k 0 0 1009k 0 0:00:09 0:00:09 --:--:-- 1076k
12:18:53 UTC success: [LocalHost]
12:18:53 UTC [ConfigureOSModule] Prepare to init OS
12:18:54 UTC success: [worker41]
12:18:54 UTC success: [cp4]
12:18:54 UTC [ConfigureOSModule] Generate init os script
12:18:54 UTC success: [cp4]
12:18:54 UTC success: [worker41]
12:18:54 UTC [ConfigureOSModule] Exec init os script
12:18:55 UTC stdout: [worker41]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
12:18:55 UTC stdout: [cp4]
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
vm.swappiness = 1
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
12:18:55 UTC success: [worker41]
12:18:55 UTC success: [cp4]
12:18:55 UTC [ConfigureOSModule] configure the ntp server for each node
12:18:55 UTC skipped: [worker41]
12:18:55 UTC skipped: [cp4]
12:18:55 UTC [KubernetesStatusModule] Get kubernetes cluster status
12:18:55 UTC success: [cp4]
12:18:55 UTC [InstallContainerModule] Sync containerd binaries
12:18:57 UTC success: [cp4]
12:18:57 UTC success: [worker41]
12:18:57 UTC [InstallContainerModule] Sync crictl binaries
12:18:58 UTC success: [cp4]
12:18:58 UTC success: [worker41]
12:18:58 UTC [InstallContainerModule] Generate containerd service
12:18:58 UTC success: [cp4]
12:18:58 UTC success: [worker41]
12:18:58 UTC [InstallContainerModule] Generate containerd config
12:18:58 UTC success: [cp4]
12:18:58 UTC success: [worker41]
12:18:58 UTC [InstallContainerModule] Generate crictl config
12:18:58 UTC success: [cp4]
12:18:58 UTC success: [worker41]
12:18:58 UTC [InstallContainerModule] Enable containerd
12:18:59 UTC success: [worker41]
12:18:59 UTC success: [cp4]
12:18:59 UTC [PullModule] Start to pull images on all nodes
12:18:59 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
12:18:59 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
12:19:01 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
12:19:01 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.1
12:19:06 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
12:19:06 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.1
12:19:08 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
12:19:10 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.1
12:19:12 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
12:19:14 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.1
12:19:17 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
12:19:19 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
12:19:22 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
12:19:26 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
12:19:27 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
12:19:31 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
12:19:36 UTC message: [worker41]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
12:19:40 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
12:19:50 UTC message: [cp4]
downloading image: registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
12:19:52 UTC success: [worker41]
12:19:52 UTC success: [cp4]
12:19:52 UTC [ETCDPreCheckModule] Get etcd status
12:19:52 UTC success: [cp4]
12:19:52 UTC [CertsModule] Fetch etcd certs
12:19:52 UTC success: [cp4]
12:19:52 UTC [CertsModule] Generate etcd Certs
[certs] Generating "ca" certificate and key
[certs] admin-cp4 serving cert is signed for DNS names [cp4 etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost worker41] and IPs [127.0.0.1 ::1 172.16.10.10 172.16.10.11]
[certs] member-cp4 serving cert is signed for DNS names [cp4 etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost worker41] and IPs [127.0.0.1 ::1 172.16.10.10 172.16.10.11]
[certs] node-cp4 serving cert is signed for DNS names [cp4 etcd etcd.kube-system etcd.kube-system.svc etcd.kube-system.svc.cluster.local lb.kubesphere.local localhost worker41] and IPs [127.0.0.1 ::1 172.16.10.10 172.16.10.11]
12:19:53 UTC success: [LocalHost]
12:19:53 UTC [CertsModule] Synchronize certs file
12:19:53 UTC success: [cp4]
12:19:53 UTC [CertsModule] Synchronize certs file to master
12:19:53 UTC skipped: [cp4]
12:19:53 UTC [InstallETCDBinaryModule] Install etcd using binary
12:19:54 UTC success: [cp4]
12:19:54 UTC [InstallETCDBinaryModule] Generate etcd service
12:19:54 UTC success: [cp4]
12:19:54 UTC [InstallETCDBinaryModule] Generate access address
12:19:54 UTC success: [cp4]
12:19:54 UTC [ETCDConfigureModule] Health check on exist etcd
12:19:54 UTC skipped: [cp4]
12:19:54 UTC [ETCDConfigureModule] Generate etcd.env config on new etcd
12:19:54 UTC success: [cp4]
12:19:54 UTC [ETCDConfigureModule] Refresh etcd.env config on all etcd
12:19:54 UTC success: [cp4]
12:19:54 UTC [ETCDConfigureModule] Restart etcd
12:19:57 UTC stdout: [cp4]
Created symlink /etc/systemd/system/multi-user.target.wants/etcd.service → /etc/systemd/system/etcd.service.
12:19:57 UTC success: [cp4]
12:19:57 UTC [ETCDConfigureModule] Health check on all etcd
12:19:57 UTC success: [cp4]
12:19:57 UTC [ETCDConfigureModule] Refresh etcd.env config to exist mode on all etcd
12:19:57 UTC success: [cp4]
12:19:57 UTC [ETCDConfigureModule] Health check on all etcd
12:19:57 UTC success: [cp4]
12:19:57 UTC [ETCDBackupModule] Backup etcd data regularly
12:19:57 UTC success: [cp4]
12:19:57 UTC [ETCDBackupModule] Generate backup ETCD service
12:19:57 UTC success: [cp4]
12:19:57 UTC [ETCDBackupModule] Generate backup ETCD timer
12:19:57 UTC success: [cp4]
12:19:57 UTC [ETCDBackupModule] Enable backup etcd service
12:19:58 UTC success: [cp4]
12:19:58 UTC [InstallKubeBinariesModule] Synchronize kubernetes binaries
12:20:08 UTC success: [cp4]
12:20:08 UTC success: [worker41]
12:20:08 UTC [InstallKubeBinariesModule] Synchronize kubelet
12:20:08 UTC success: [worker41]
12:20:08 UTC success: [cp4]
12:20:08 UTC [InstallKubeBinariesModule] Generate kubelet service
12:20:08 UTC success: [cp4]
12:20:08 UTC success: [worker41]
12:20:08 UTC [InstallKubeBinariesModule] Enable kubelet service
12:20:09 UTC success: [worker41]
12:20:09 UTC success: [cp4]
12:20:09 UTC [InstallKubeBinariesModule] Generate kubelet env
12:20:09 UTC success: [cp4]
12:20:09 UTC success: [worker41]
12:20:09 UTC [InitKubernetesModule] Generate kubeadm config
12:20:09 UTC success: [cp4]
12:20:09 UTC [InitKubernetesModule] Init cluster using kubeadm
12:20:25 UTC stdout: [cp4]
W0924 12:20:09.513341 3665 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0924 12:20:09.514588 3665 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W0924 12:20:09.522371 3665 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.244.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [cp4 cp4.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost worker41 worker41.cluster.local] and IPs [10.244.0.1 172.16.10.10 127.0.0.1 172.16.10.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.003533 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node cp4 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node cp4 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: g9viw6.oma7nbn14lballdw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join lb.kubesphere.local:6443 --token g9viw6.oma7nbn14lballdw \
--discovery-token-ca-cert-hash sha256:0d4051bdc07855ce501dd71fc95ce23dbd102a0de4e36bbefb098811069eae8e \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join lb.kubesphere.local:6443 --token g9viw6.oma7nbn14lballdw \
--discovery-token-ca-cert-hash sha256:0d4051bdc07855ce501dd71fc95ce23dbd102a0de4e36bbefb098811069eae8e
12:20:25 UTC success: [cp4]
12:20:25 UTC [InitKubernetesModule] Copy admin.conf to ~/.kube/config
12:20:25 UTC success: [cp4]
12:20:25 UTC [InitKubernetesModule] Remove master taint
12:20:26 UTC stdout: [cp4]
node/cp4 untainted
12:20:26 UTC stdout: [cp4]
node/cp4 untainted
12:20:26 UTC success: [cp4]
12:20:26 UTC [InitKubernetesModule] Add worker label
12:20:26 UTC stdout: [cp4]
node/cp4 labeled
12:20:26 UTC success: [cp4]
12:20:26 UTC [ClusterDNSModule] Generate coredns service
12:20:26 UTC success: [cp4]
12:20:26 UTC [ClusterDNSModule] Override coredns service
12:20:26 UTC stdout: [cp4]
service "kube-dns" deleted
12:20:27 UTC stdout: [cp4]
service/coredns created
Warning: resource clusterroles/system:coredns is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrole.rbac.authorization.k8s.io/system:coredns configured
12:20:27 UTC success: [cp4]
12:20:27 UTC [ClusterDNSModule] Generate nodelocaldns
12:20:27 UTC success: [cp4]
12:20:27 UTC [ClusterDNSModule] Deploy nodelocaldns
12:20:27 UTC stdout: [cp4]
serviceaccount/nodelocaldns created
daemonset.apps/nodelocaldns created
12:20:27 UTC success: [cp4]
12:20:27 UTC [ClusterDNSModule] Generate nodelocaldns configmap
12:20:28 UTC success: [cp4]
12:20:28 UTC [ClusterDNSModule] Apply nodelocaldns configmap
12:20:28 UTC stdout: [cp4]
configmap/nodelocaldns created
12:20:28 UTC success: [cp4]
12:20:28 UTC [KubernetesStatusModule] Get kubernetes cluster status
12:20:28 UTC stdout: [cp4]
v1.24.1
12:20:28 UTC stdout: [cp4]
cp4 v1.24.1 [map[address:172.16.10.10 type:InternalIP] map[address:cp4 type:Hostname]]
12:20:30 UTC stdout: [cp4]
I0924 12:20:29.685946 4408 version.go:255] remote version is much newer: v1.25.2; falling back to: stable-1.24
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
dddbbd2547ed4f8f0cdc0a22040ad37c08c24855c8d9b3f67ca1e9dc7ed73268
12:20:30 UTC stdout: [cp4]
secret/kubeadm-certs patched
12:20:30 UTC stdout: [cp4]
secret/kubeadm-certs patched
12:20:30 UTC stdout: [cp4]
secret/kubeadm-certs patched
12:20:30 UTC stdout: [cp4]
3xqg2r.ql189s90j5k3x8bq
12:20:30 UTC success: [cp4]
12:20:30 UTC [JoinNodesModule] Generate kubeadm config
12:20:31 UTC skipped: [cp4]
12:20:31 UTC success: [worker41]
12:20:31 UTC [JoinNodesModule] Join control-plane node
12:20:31 UTC skipped: [cp4]
12:20:31 UTC [JoinNodesModule] Join worker node
12:20:57 UTC stdout: [worker41]
W0924 12:20:31.100227 2830 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0924 12:20:43.588320 2830 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.244.0.10]; the provided value is: [169.254.25.10]
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
12:20:57 UTC skipped: [cp4]
12:20:57 UTC success: [worker41]
12:20:57 UTC [JoinNodesModule] Copy admin.conf to ~/.kube/config
12:20:57 UTC skipped: [cp4]
12:20:57 UTC [JoinNodesModule] Remove master taint
12:20:57 UTC skipped: [cp4]
12:20:57 UTC [JoinNodesModule] Add worker label to master
12:20:57 UTC skipped: [cp4]
12:20:57 UTC [JoinNodesModule] Synchronize kube config to worker
12:20:57 UTC skipped: [cp4]
12:20:57 UTC success: [worker41]
12:20:57 UTC [JoinNodesModule] Add worker label to worker
12:20:57 UTC stdout: [worker41]
node/worker41 labeled
12:20:57 UTC skipped: [cp4]
12:20:57 UTC success: [worker41]
12:20:57 UTC [DeployNetworkPluginModule] Generate calico
12:20:57 UTC success: [cp4]
12:20:57 UTC [DeployNetworkPluginModule] Deploy calico
12:20:58 UTC stdout: [cp4]
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
poddisruptionbudget.policy/calico-kube-controllers created
12:20:58 UTC success: [cp4]
12:20:58 UTC [ConfigureKubernetesModule] Configure kubernetes
12:20:58 UTC success: [worker41]
12:20:58 UTC success: [cp4]
12:20:58 UTC [ChownModule] Chown user $HOME/.kube dir
12:20:58 UTC success: [worker41]
12:20:58 UTC success: [cp4]
12:20:58 UTC [AutoRenewCertsModule] Generate k8s certs renew script
12:20:58 UTC success: [cp4]
12:20:58 UTC [AutoRenewCertsModule] Generate k8s certs renew service
12:20:58 UTC success: [cp4]
12:20:58 UTC [AutoRenewCertsModule] Generate k8s certs renew timer
12:20:58 UTC success: [cp4]
12:20:58 UTC [AutoRenewCertsModule] Enable k8s certs renew service
12:20:59 UTC success: [cp4]
12:20:59 UTC [SaveKubeConfigModule] Save kube config as a configmap
12:20:59 UTC success: [LocalHost]
12:20:59 UTC [AddonsModule] Install addons
12:20:59 UTC success: [LocalHost]
12:20:59 UTC Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.
Please check the result using the command:
kubectl get pod -A
Check the cluster
root@cp4:/home/zyi# kubectl get node
NAME STATUS ROLES AGE VERSION
cp4 NotReady control-plane,worker 44s v1.24.1
worker41 NotReady worker 8s v1.24.1
root@cp4:/home/zyi# kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-f9f9bbcc9-b2xxx 1/1 Running 0 22s
kube-system calico-node-mcbpn 1/1 Running 0 22s
kube-system calico-node-zbm8p 1/1 Running 0 22s
kube-system coredns-f657fccfd-6tnvr 1/1 Running 0 43s
kube-system coredns-f657fccfd-rqwsm 1/1 Running 0 43s
kube-system kube-apiserver-cp4 1/1 Running 0 58s
kube-system kube-controller-manager-cp4 1/1 Running 0 57s
kube-system kube-proxy-2q4p9 1/1 Running 0 23s
kube-system kube-proxy-ml5sx 1/1 Running 0 43s
kube-system kube-scheduler-cp4 1/1 Running 0 56s
kube-system nodelocaldns-4srdw 1/1 Running 0 23s
kube-system nodelocaldns-fd9gx 1/1 Running 0 43s
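Immediately after creation the nodes may briefly report NotReady, as in the kubectl get node output above, until the Calico pods are running. A small sketch for waiting until the cluster settles (plain kubectl, nothing KubeKey-specific):
kubectl wait --for=condition=Ready node --all --timeout=300s
kubectl get node -o wide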