Background
kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated alongside every Kubernetes release, and each release adjusts some of the practices around cluster configuration, so experimenting with kubeadm is a good way to learn the newest upstream best practices for setting up a cluster.
System Requirements
| Hardware/Software | Minimum Configuration | Recommended Configuration |
| --- | --- | --- |
| Host resources | For a cluster of 1 to 5 nodes: master: at least 1 CPU core and 2GB of RAM; node: at least 1 CPU core and 1GB of RAM. Increase the host specifications as the cluster grows. | master: 4 CPU cores and 16GB of RAM; node: sized according to the number of containers it needs to run. |
| Linux operating system | Any mainstream Linux distribution, including Red Hat Linux, CentOS, Fedora, Ubuntu, etc., with kernel version 3.10 or later | CentOS 7.8 |
| etcd | v3 or later | v3 |
Installation Steps
- Open the ports required by the Kubernetes components on the master node (see Installing kubeadm | Kubernetes); a quick port check is sketched after the commands
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379/tcp
firewall-cmd --permanent --add-port=2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --reload
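To confirm the rules took effect, the open ports can be listed (an optional check; 6443 is the kube-apiserver, 2379-2380 etcd, 10250 the kubelet, 10251 the kube-scheduler, and 10252 the kube-controller-manager):
firewall-cmd --list-ports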
- Open the ports required by the Kubernetes components on the worker nodes; see the link referenced in the first step
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
The following steps apply to all nodes
- Disable SELinux (a sketch for making the change persistent follows the command)
setenforce 0
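setenforce 0 only lasts until the next reboot; to keep SELinux permissive permanently, /etc/selinux/config can be updated as well (a minimal sketch):
# Persist the permissive setting across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config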
- Create the file /etc/sysctl.d/k8s.conf with the following content
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
- Run the following commands to make the settings take effect (a verification sketch follows)
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
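The br_netfilter module must be loaded before the bridge-nf settings can apply. To verify the values and load the module automatically at boot, something like the following can be used (the modules-load.d file name is an arbitrary choice):
# Check that the settings are active
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
# Load br_netfilter automatically on boot
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf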
- Create /etc/sysconfig/modules/ipvs.modules with the following content
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
- Run the script so that the required modules are loaded automatically after a node reboot, and check that the kernel modules have been loaded correctly
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
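Loading these modules suggests kube-proxy will run in IPVS mode, which also requires the ipset package; ipvsadm is useful for inspecting the IPVS rules. An optional install sketch:
yum install -y ipset ipvsadm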
- Install the Docker yum repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Install and start Docker
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-18.09.7-3.el7
systemctl start docker
systemctl enable docker
- Confirm that the default policy of the FORWARD chain in the iptables filter table is ACCEPT (a sketch for resetting it follows the command)
iptables -nvL
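Recent Docker releases set the FORWARD chain policy to DROP, which blocks cross-node Pod traffic. If the output above shows DROP, the policy can be reset (note this is not persistent across reboots):
iptables -P FORWARD ACCEPT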
- Create or modify /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
- Restart Docker
systemctl restart docker
docker info | grep cgroup
(the output should show Cgroup Driver: systemd)
- Create the Kubernetes yum repository
vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
- Install kubelet, kubeadm, and kubectl (a version-pinned variant is sketched below)
yum makecache fast
yum install -y kubelet kubeadm kubectl
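Since the cluster below is initialized at v1.15.3, it may be safer to pin the package versions rather than install the latest; a sketch, assuming the repository above provides these versions:
yum install -y kubelet-1.15.3 kubeadm-1.15.3 kubectl-1.15.3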
- Turn off the swap partition, then edit /etc/fstab and comment out the entry that mounts swap automatically (a sed sketch follows the command)
swapoff -a
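Commenting out the swap entry in /etc/fstab can also be scripted; the sketch below comments out every line containing "swap", so review the file afterwards:
sed -ri 's/.*swap.*/#&/' /etc/fstab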
- Use the kubelet startup parameter --fail-swap-on=false to lift the requirement that swap be disabled: edit /etc/sysconfig/kubelet and add:
KUBELET_EXTRA_ARGS=--fail-swap-on=false
- Enable and start the kubelet service
systemctl enable kubelet.service
systemctl start kubelet.service
- Configure the environment variable for kubectl (on the master, /etc/kubernetes/admin.conf is generated by kubeadm init below)
export KUBECONFIG=/etc/kubernetes/admin.conf
Master node
Create kubeadm.yaml and use it to initialize the cluster, where advertiseAddress is the IP address of the master node (a sketch for previewing the required images follows the file)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.152.100
  bindPort: 6443
nodeRegistration:
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
networking:
  podSubnet: 10.244.0.0/16
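Before pulling anything, kubeadm can list exactly which images this version needs, which is a useful cross-check against the pull script below (a minimal sketch):
kubeadm config images list --kubernetes-version v1.15.3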
Create a get_images.sh script to pull the Docker images, then run it (a run-and-verify sketch follows the script)
echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.15.3 Images from mirrorgooglecontainers ......"
echo "=========================================================="
echo ""
MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
## Pull the images
docker pull ${MY_REGISTRY}/kube-apiserver-amd64:v1.15.3
docker pull ${MY_REGISTRY}/kube-controller-manager-amd64:v1.15.3
docker pull ${MY_REGISTRY}/kube-scheduler-amd64:v1.15.3
docker pull ${MY_REGISTRY}/kube-proxy-amd64:v1.15.3
docker pull ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10
docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
## Retag the images
docker tag ${MY_REGISTRY}/kube-apiserver-amd64:v1.15.3 k8s.gcr.io/kube-apiserver:v1.15.3
docker tag ${MY_REGISTRY}/kube-scheduler-amd64:v1.15.3 k8s.gcr.io/kube-scheduler:v1.15.3
docker tag ${MY_REGISTRY}/kube-controller-manager-amd64:v1.15.3 k8s.gcr.io/kube-controller-manager:v1.15.3
docker tag ${MY_REGISTRY}/kube-proxy-amd64:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker tag ${MY_REGISTRY}/k8s-gcr-io-etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.15.2 Images FINISHED."
echo "=========================================================="
echo ""
Initialize the cluster
kubeadm init --config kubeadm.yaml
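kubeadm init prints a kubeadm join command at the end; save it for the node join step below. The control plane can then be checked with kubectl, using the KUBECONFIG exported earlier:
kubectl get nodes
# The coredns Pods stay Pending until the flannel network below is applied
kubectl get pods -n kube-system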
Create the following script on each node to pull the images, and run it
echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.15.3 Images from mirrorgooglecontainers......"
echo "=========================================================="
echo ""
MY_REGISTRY=registry.cn-hangzhou.aliyuncs.com/openthings
## Pull the images
docker pull ${MY_REGISTRY}/kube-proxy-amd64:v1.15.3
docker pull ${MY_REGISTRY}/k8s-gcr-io-pause:3.1
docker pull ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
## Retag the images
docker tag ${MY_REGISTRY}/kube-proxy-amd64:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker tag ${MY_REGISTRY}/k8s-gcr-io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag ${MY_REGISTRY}/k8s-gcr-io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
echo ""
echo "=========================================================="
echo "Pull Kubernetes v1.15.3 Images FINISHED."
echo "=========================================================="
echo ""
Install the flannel network on the master node (if the master has multiple NICs, modify the --iface argument in kube-flannel.yml to specify the name of the host's internal NIC; here the NIC is ens33); a download sketch follows the apply command
kubectl apply -f kube-flannel.yml
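If kube-flannel.yml is not already on the master, it can be fetched and edited before running the apply above; the URL is an assumption based on the flannel v0.11.0 image used in the pull scripts:
wget https://raw.githubusercontent.com/coreos/flannel/v0.11.0/Documentation/kube-flannel.yml
# On multi-NIC hosts, add "- --iface=ens33" to the flanneld container args in the DaemonSet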
Join the worker nodes to the cluster
kubeadm join 192.168.152.100:6443 --token xxx --discovery-token-ca-cert-hash sha256:xxx
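The token and CA cert hash come from the kubeadm init output; if they were lost, or the token (valid for 24 hours by default) has expired, a new join command can be generated on the master:
kubeadm token create --print-join-command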