Offline (Air-Gap) Installation of RKE2 (Rancher Kubernetes Engine 2)

Official Documentation

English version: Air-Gap Install
Chinese version: 离线安装 (Offline Install)

Installing the RKE2 Executable

Download the Binaries

Download the offline files for the desired version from GitHub Releases; this guide uses v1.29.7+rke2r1 as an example:

# Create a working directory named after the release
mkdir /home/v1.29.7+rke2r1 && cd /home/v1.29.7+rke2r1/
# Download the image archive, the rke2 tarball, and the checksum file
# (%2B is the URL-encoded "+" in the release tag)
curl -OLs https://github.com/rancher/rke2/releases/download/v1.29.7%2Brke2r1/rke2-images.linux-amd64.tar.zst
curl -OLs https://github.com/rancher/rke2/releases/download/v1.29.7%2Brke2r1/rke2.linux-amd64.tar.gz
curl -OLs https://github.com/rancher/rke2/releases/download/v1.29.7%2Brke2r1/sha256sum-amd64.txt
# Download the install script
curl -sfL https://get.rke2.io --output install.sh
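Optionally, verify the downloads against the checksum file (a quick sanity check; --ignore-missing skips entries for artifacts you did not download):

sha256sum -c --ignore-missing sha256sum-amd64.txt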

Copy the Offline Files

Copy the image archive into the /var/lib/rancher/rke2/agent/images/ directory. On startup, RKE2 imports any image archives found there, so the core images never need to be pulled from a registry.

mkdir -p /var/lib/rancher/rke2/agent/images/
cp rke2-images-*.tar.* /var/lib/rancher/rke2/agent/images/

Install the rke2 Executable

This step must be performed on every node.

Pitfall warning: the official documentation calls this step "installing RKE2", which is somewhat ambiguous and misleading. In reality, this step only extracts the rke2 executable and a few scripts from rke2.linux-amd64.tar.gz, copies them into place, and sets execute permissions; it does not start any process, nor does it install a Kubernetes cluster!

INSTALL_RKE2_ARTIFACT_PATH=/home/v1.29.7+rke2r1 sh install.sh

# Inspect the installed executables and scripts (rke2, the cleanup script rke2-killall.sh, and the uninstall script rke2-uninstall.sh)
ls -l /usr/local/bin/rke2*

# Inspect the installed systemd units (rke2-server and rke2-agent)
ls -l /usr/local/lib/systemd/system/rke2*

# Neither rke2-server nor rke2-agent is running yet (status: inactive)
systemctl status rke2-server.service
systemctl status rke2-agent.service
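A quick check that the binary is in place and matches the downloaded release:

/usr/local/bin/rke2 --version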

At this point, node preparation is complete. The rest follows the node plan: start the rke2-server service on master nodes and the rke2-agent service on worker nodes.

Starting RKE2-Server (Installing Kubernetes Master Nodes)

This is the step that actually installs the Kubernetes cluster.

Configuration

mkdir -p /etc/rancher/rke2

# Private image registry configuration
cat > /etc/rancher/rke2/registries.yaml << EOF
mirrors:
  "*":
    endpoint:
      - "http://your_registry_host_ip:5000"
EOF
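With "*" as the mirror key, every registry is redirected to the private endpoint. If the registry required authentication, that would go in a configs section of the same file; a hypothetical sketch (your_user and your_password are placeholders):

configs:
  "your_registry_host_ip:5000":
    auth:
      username: your_user
      password: your_password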

# rke2 configuration for the first master node: no server parameter; node-taint taints the node so it runs no regular workloads
cat > /etc/rancher/rke2/config.yaml << EOF
token: your_token
tls-san: host_ip_of_master_node1
node-taint:
  - "CriticalAddonsOnly=true:NoExecute"
system-default-registry: "your_registry_host_ip:5000"
control-plane-resource-requests: kube-apiserver-cpu=50m,kube-apiserver-memory=256M,kube-scheduler-cpu=50m,kube-scheduler-memory=128M,etcd-cpu=100m
EOF

The config.yaml above sets control-plane-resource-requests, i.e. resource requests for the control-plane components; see the official docs on control plane component resource requests/limits. The components that can be tuned are kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy, etcd, and cloud-controller-manager; the values in effect can be inspected with kubectl describe pod kube-apiserver-your_hostname -n kube-system. These settings affect only the local host and take effect after the rke2-server service is restarted.
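A minimal sketch for confirming a changed request took effect after a restart, assuming a master node whose hostname is master1 (the pod name suffix follows the hostname):

systemctl restart rke2-server.service
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml describe pod kube-apiserver-master1 -n kube-system | grep -A 3 Requests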

Start

# Enable the rke2-server service (start on boot)
systemctl enable rke2-server.service

# Start the rke2-server service
systemctl start rke2-server.service
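The two steps can also be combined into one command:

systemctl enable --now rke2-server.service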

Verify

# Follow the logs
journalctl -u rke2-server -f

# Confirm the cluster is healthy
/var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes
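If token was not set explicitly in config.yaml, RKE2 generates one; it can be read on the first server and used when joining the other nodes:

cat /var/lib/rancher/rke2/server/node-token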

Simplifying Tool Usage (rke2-server)

Unlike k3s, RKE2 does not add kubectl and the other bundled tools to the global PATH, and its kubeconfig is not in the default location kubectl looks in, so both must be specified manually.

# Add the following to ~/.bashrc
export PATH=/var/lib/rancher/rke2/bin:$PATH
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml
export CONTAINERD_ADDRESS=/run/k3s/containerd/containerd.sock

# Apply the changes
source ~/.bashrc

ctr is containerd's command-line tool for interacting with the containerd runtime directly; crictl is the Kubernetes-standard client for interacting with any Container Runtime Interface (CRI) compatible runtime.
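A quick comparison of the two (RKE2's Kubernetes images live in containerd's k8s.io namespace, so ctr needs -n k8s.io unless CONTAINERD_NAMESPACE is also exported):

# List images through the CRI
crictl images

# List the same images directly in containerd
ctr -n k8s.io images ls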

Other Master Nodes

The number of master nodes must be odd (1, 3, 5, …). Subsequent master nodes join the cluster simply by adding a server parameter to their configuration that points to the first master node.

# rke2 configuration for the nth master node (adds the server parameter)
cat > /etc/rancher/rke2/config.yaml << EOF
server: https://host_ip_of_master_node1:9345
token: your_token
tls-san: host_ip_of_master_node1
system-default-registry: "your_registry_host_ip:5000"
EOF
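For production HA, the official docs recommend a fixed registration address (a DNS name on a VIP or load balancer) in front of all server nodes, rather than pointing everything at the first master. A hypothetical config.yaml sketch, where rke2.example.internal stands in for such an address:

server: https://rke2.example.internal:9345
token: your_token
tls-san:
  - rke2.example.internal
  - host_ip_of_master_node1
system-default-registry: "your_registry_host_ip:5000"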

Starting RKE2-Agent (Installing Kubernetes Worker Nodes)

Configuration

mkdir -p /etc/rancher/rke2

# Private image registry configuration
cat > /etc/rancher/rke2/registries.yaml << EOF
mirrors:
  "*":
    endpoint:
      - "http://your_registry_host_ip:5000"
EOF

# rke2 configuration for a worker node (tls-san is a server-only option, so it is omitted here)
cat > /etc/rancher/rke2/config.yaml << EOF
server: https://host_ip_of_master_node1:9345
token: your_token
system-default-registry: "your_registry_host_ip:5000"
EOF

Start

# Enable the rke2-agent service (start on boot)
systemctl enable rke2-agent.service

# Start the rke2-agent service
systemctl start rke2-agent.service

# Follow the logs
journalctl -u rke2-agent -f

Simplifying Tool Usage (rke2-agent)

# Add the following to ~/.bashrc (vi ~/.bashrc)
# rke2 tools
alias ctr='/var/lib/rancher/rke2/bin/ctr --address /run/k3s/containerd/containerd.sock --namespace k8s.io'
alias crictl='CRI_CONFIG_FILE=/var/lib/rancher/rke2/agent/etc/crictl.yaml /var/lib/rancher/rke2/bin/crictl'

# Apply the changes
source ~/.bashrc
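A quick smoke test of the aliases:

# Containers the kubelet has started on this node
crictl ps

# Images in containerd (the alias already selects the k8s.io namespace)
ctr images ls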

Pushing Images to the Private Registry

Take registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0 as an example.

# Pull
docker pull registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0

# Tag
# Note: this step replaces the registry domain with the private registry, i.e. registry.k8s.io becomes your_registry_host_ip:5000
docker tag registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0 your_registry_host_ip:5000/kube-state-metrics/kube-state-metrics:v2.13.0

# Push
docker push your_registry_host_ip:5000/kube-state-metrics/kube-state-metrics:v2.13.0

Pulling and tagging directly on a k8s node (a last resort):

crictl pull your_registry_host_ip:5000/kube-state-metrics/kube-state-metrics:v2.13.0
ctr image tag your_registry_host_ip:5000/kube-state-metrics/kube-state-metrics:v2.13.0 registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0
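When the node cannot reach any registry at all, the image can instead be carried over as a tarball; a sketch assuming the archive is produced with docker save on a connected machine:

# On a connected machine
docker save registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0 -o kube-state-metrics.tar

# On the air-gapped node (the ctr alias already selects the k8s.io namespace)
ctr images import kube-state-metrics.tar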

Prometheus Monitoring

GitHub: prometheus-community

Install (kube-prometheus-stack):

wget https://github.com/prometheus-community/helm-charts/releases/download/kube-prometheus-stack-62.3.1/kube-prometheus-stack-62.3.1.tgz
tar -xzvf kube-prometheus-stack-62.3.1.tgz
cd kube-prometheus-stack
helm install prometheus .
# Online alternative:
#helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring --create-namespace
# The pod name hash suffix is instance-specific; substitute your own
kubectl port-forward pod/prometheus-grafana-854cfc45d9-tsg79 3000:3000
#kubectl port-forward pod/prometheus-kube-prometheus-prometheus 9090:9090
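Port-forwarding to the Service avoids depending on the pod's hash suffix; with the default chart values, the prometheus-grafana Service listens on port 80:

kubectl port-forward svc/prometheus-grafana 3000:80
kubectl port-forward svc/prometheus-kube-prometheus-prometheus 9090:9090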

Using Grafana

Default credentials: admin / prom-operator; the password is configured in the chart's values.yaml.
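If the default has been overridden, the actual password can be read from the Grafana secret created by the chart (the secret name follows the <release>-grafana pattern, so prometheus-grafana here):

kubectl get secret prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 -d; echo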