Install HAProxy on any node (a dedicated node is recommended, though an existing master also works):
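A minimal install, assuming a Debian/Ubuntu host like the ones in these notes:
sudo apt update
sudo apt install -y haproxy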
sudo vim /etc/haproxy/haproxy.cfg
shitou@aishitou:~$ cat /etc/haproxy/haproxy.cfg
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets
defaults
log global
mode tcp
option tcplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#errorfile 400 /etc/haproxy/errors/400.http
#errorfile 403 /etc/haproxy/errors/403.http
#errorfile 408 /etc/haproxy/errors/408.http
#errorfile 500 /etc/haproxy/errors/500.http
#errorfile 502 /etc/haproxy/errors/502.http
#errorfile 503 /etc/haproxy/errors/503.http
#errorfile 504 /etc/haproxy/errors/504.http
# Frontend: the port clients connect to; it must match the port in controlPlaneEndpoint (6443 here)
frontend k8s_front
    bind *:6443 # port exposed by the load-balancer node (the join commands below target 192.168.31.2:6443)
    mode tcp # use tcp mode for TCP services such as the Kubernetes API server; http mode for HTTP services
    default_backend k8s_back # forward to the backend server group
# Backend: define the IPs and ports of the two nodes
backend k8s_back
    mode tcp # must match the frontend mode
    balance roundrobin # round-robin strategy (requests are distributed in turn)
    # Backend server format: "server <name> <IP:port> check" (check enables health checking)
    server master 192.168.31.19:6443 check # existing master node (6443 assumed; adjust to your actual service port)
    server new-node 192.168.31.6:6443 check # newly added node (same port as the master)
# Stats page (new)
listen stats
    bind *:8081 # stats page port (change to any free port)
    mode http
    stats enable
    stats uri /haproxy-stats # access path
    stats auth admin:123456 # change to a strong password; format is username:password
    stats refresh 30s # page auto-refresh interval
sudo systemctl restart haproxy
sudo systemctl enable haproxy
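After the restart, the configuration file and the stats page can be verified (credentials come from the listen stats block above):
sudo haproxy -c -f /etc/haproxy/haproxy.cfg # syntax check; prints "Configuration file is valid"
sudo systemctl status haproxy --no-pager
curl -u admin:123456 http://127.0.0.1:8081/haproxy-stats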
Run the following on the existing master node to apply the edited configuration to the cluster:
kubectl apply -f kubeadm-config.yaml
On success it prints:
configmap/kubeadm-config configured
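To confirm the change actually landed in the cluster, read back the stored configuration and check the endpoint:
kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep controlPlaneEndpoint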
====================================================================
shitou@shitou:~$ sudo kubeadm config view > kubeadm-config.yaml
[sudo] password for shitou:
invalid subcommand "view"
See 'kubeadm config -h' for help and examples
shitou@shitou:~$ kubectl -n kube-system get cm kubeadm-config -o yaml > kubeadm-config.yaml
shitou@shitou:~$ sudo vim kubeadm-config.yaml
shitou@shitou:~$ cat kubeadm-config.yaml
apiVersion: v1
data:
ClusterConfiguration: |
apiServer:
extraArgs:
authorization-mode: Node,RBAC
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.31.2:6443"
controllerManager: {}
dns: {}
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.28.15
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
kind: ConfigMap
metadata:
creationTimestamp: "2025-08-18T07:32:49Z"
name: kubeadm-config
namespace: kube-system
resourceVersion: "237"
uid: 64a9e958-b83d-4db7-be21-bf6ba18f8511
shitou@shitou:~$ sudo kubeadm init phase upload-config cluster --config kubeadm-config.yaml
unknown flag: --config
To see the stack trace of this error execute with --v=5 or higher
shitou@shitou:~$ kubectl apply -f kubeadm-config.yaml
Warning: resource configmaps/kubeadm-config is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
configmap/kubeadm-config configured
1. Key operations and notes
Command: sudo kubeadm config view > kubeadm-config.yaml
Purpose: view and export the cluster configuration via kubeadm
Result / problem: failed with invalid subcommand "view"
Analysis: recent kubeadm releases have no view subcommand under kubeadm config; the configuration has to be read from the ConfigMap with kubectl instead.

Command: kubectl -n kube-system get cm kubeadm-config -o yaml > kubeadm-config.yaml
Purpose: export the kubeadm-config ConfigMap from the kube-system namespace to a file
Result / problem: succeeded; kubeadm-config.yaml was generated
Analysis: this is the correct way to obtain the cluster's initialization configuration: kubeadm stores it in the kubeadm-config ConfigMap under kube-system.

Command: sudo vim kubeadm-config.yaml
Purpose: edit the exported configuration file
Result / problem: edit completed (the specific changes are not shown)
Analysis: typically used to adjust cluster parameters such as API server arguments or networking; here the evident goal is pointing controlPlaneEndpoint at the load balancer.

Command: sudo kubeadm init phase upload-config cluster --config kubeadm-config.yaml
Purpose: upload the modified cluster configuration via a kubeadm phase command
Result / problem: failed with unknown flag: --config
Analysis: the invocation is wrong; check kubeadm init phase upload-config -h for the supported subcommands and flags.

Command: kubectl apply -f kubeadm-config.yaml
Purpose: apply the edited file, updating the kubeadm-config ConfigMap
Result / problem: succeeded with configmap/kubeadm-config configured, plus a warning
Analysis: the warning appears because the original ConfigMap lacked the kubectl.kubernetes.io/last-applied-configuration annotation (used to track declaratively created resources); kubectl patched it automatically, and the configuration still takes effect.
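For reference: in kubeadm v1.28 the upload-config phase expects a subcommand (all, kubeadm, or kubelet), and --config wants a bare kubeadm ClusterConfiguration document rather than the ConfigMap wrapper exported above. An untested sketch of that route (clusterconfig.yaml is a hypothetical scratch file):
kubectl -n kube-system get cm kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > clusterconfig.yaml
sudo kubeadm init phase upload-config kubeadm --config clusterconfig.yaml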
2. Breakdown of kubeadm-config.yaml
This file is the content of the kubeadm-config ConfigMap in the kube-system namespace; it stores the cluster's initialization configuration (ClusterConfiguration). The key settings are:
apiVersion: v1 # API version of the ConfigMap resource
data:
  ClusterConfiguration: | # the core cluster configuration
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC # API server authorization modes: Node authorizer plus RBAC (role-based access control)
      timeoutForControlPlane: 4m0s # timeout for control-plane initialization (4 minutes)
    apiVersion: kubeadm.k8s.io/v1beta3 # API version of the kubeadm configuration
    certificatesDir: /etc/kubernetes/pki # certificate storage directory
    clusterName: kubernetes # cluster name (default)
    controlPlaneEndpoint: "192.168.31.2:6443" # control-plane endpoint (usually the load balancer or master IP:port; 6443 is the default K8s API port)
    controllerManager: {} # controller-manager settings (empty means defaults)
    dns: {} # DNS settings (empty means CoreDNS defaults)
    etcd:
      local:
        dataDir: /var/lib/etcd # data directory of the local etcd
    imageRepository: registry.k8s.io # container image registry (official Kubernetes registry)
    kind: ClusterConfiguration # configuration type: cluster configuration
    kubernetesVersion: v1.28.15 # Kubernetes version
    networking:
      dnsDomain: cluster.local # in-cluster DNS domain
      podSubnet: 10.244.0.0/16 # Pod CIDR (must match the CNI plugin; Flannel commonly uses this range)
      serviceSubnet: 10.96.0.0/12 # Service CIDR
    scheduler: {} # scheduler settings (empty means defaults)
kind: ConfigMap # resource type: ConfigMap
metadata: # metadata
  creationTimestamp: "2025-08-18T07:32:49Z" # creation time
  name: kubeadm-config # resource name
  namespace: kube-system # namespace (reserved for system components)
  resourceVersion: "237" # resource version (tracks updates)
  uid: 64a9e958-b83d-4db7-be21-bf6ba18f8511 # unique identifier
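Since controlPlaneEndpoint now points at the HAProxy address, a quick sanity check is to hit the load-balanced API endpoint directly; on a default kubeadm cluster the health endpoint is readable without credentials (-k skips certificate verification, expected output: ok):
curl -k https://192.168.31.2:6443/healthz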
======================================================================================
Generate the certificate key on the existing node:
shitou@shitou:~$ sudo kubeadm init phase upload-certs --upload-certs
I0819 11:01:22.796311 123471 version.go:256] remote version is much newer: v1.33.4; falling back to: stable-1.28
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
00d1798e0b06ecc8e5f27584ff9cf539f191cc7b8c798bf9e826df79f7cf116e
Generate the join command:
shitou@shitou:~$ sudo kubeadm token create --print-join-command
kubeadm join 192.168.31.19:6443 --token blaexm.tqomyqllhieeb7wf --discovery-token-ca-cert-hash sha256:5ee36933b68e3b23bbe55771e3bd75514255e54e816786844a25a051f9ebda70
shitou@shitou:~$
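Bootstrap tokens default to a 24-hour TTL, and the certificate key uploaded by upload-certs expires after two hours, so regenerate both if the join is delayed. To confirm the token is still valid:
sudo kubeadm token list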
On the new server. Note the address: the command below targets the load-balancer endpoint 192.168.31.2:6443 (matching controlPlaneEndpoint) rather than the 192.168.31.19:6443 printed by print-join-command above:
shitou@shitou-To-be-filled-by-O-E-M:~$ sudo kubeadm join 192.168.31.2:6443 \
--token blaexm.tqomyqllhieeb7wf \
--discovery-token-ca-cert-hash sha256:5ee36933b68e3b23bbe55771e3bd75514255e54e816786844a25a051f9ebda70 \
--control-plane \
--certificate-key 00d1798e0b06ecc8e5f27584ff9cf539f191cc7b8c798bf9e826df79f7cf116e
After running it:
If the network and configuration are correct, the command automatically pulls images and sets up the control-plane components, which takes roughly 3-5 minutes. On success it prints something like: This node has joined the cluster and is now a control plane node
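Once the join completes, a quick check from any node with admin credentials should show the new control-plane member (output will vary with your node names):
kubectl get nodes
kubectl -n kube-system get pods | grep kube-apiserver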