
Adding a cloud master node to an OpenYurt cluster

2023-09-27 09:24:45

1.1.  Environment

  1.   Kubernetes cluster version: v1.18.9
  2.   Docker version: 19.03.15
  3.   Linux: CentOS Linux 7 (Core), kernel 3.10.0-957.27.2.el7.x86_64
  4.   Existing master nodes: master-1 and master-3. This document takes a reset master-2 node as the example and joins master-2 to the OpenYurt cluster.

1.2.  Installing Docker

 1.  Add the docker-ce repository ( vi /etc/yum.repos.d/docker-ce.repo )

[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

  2.  Install docker and add its configuration file

# Install docker
yum update && yum install -y docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io-1.4.12
# Enable docker to start on boot
systemctl enable docker --now
# Write the /etc/docker/daemon.json configuration file
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m", "max-file": "3"},
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Restart docker
sudo systemctl daemon-reload
sudo systemctl restart docker
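A syntax error in daemon.json would keep the docker daemon from starting after the restart above. A quick validation step can catch that first; this is a sketch assuming python3 is available on the host:

```shell
# Validate /etc/docker/daemon.json before restarting docker.
# A malformed file would leave the daemon unable to start.
if python3 -m json.tool /etc/docker/daemon.json >/dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: syntax error, fix before restarting docker"
fi
```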

1.3. Installing kubectl, kubeadm, and kubelet

 1.  Add the Kubernetes repository ( vi /etc/yum.repos.d/Kubernetes.repo )

[Kubernetes]
baseurl = http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
gpgkey = http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
name = kubernetes
repo_gpgcheck = 0

  2.  Basic environment setup

# Set the hostname
hostnamectl set-hostname master-2
# Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Apply the sysctl settings (no reboot needed)
sudo sysctl --system

  3.  Install kubectl, kubeadm, and kubelet, and start kubelet

# Install the kubernetes packages
yum update && yum install -y kubelet-1.18.9 kubeadm-1.18.9 kubectl-1.18.9 --disableexcludes=kubernetes
# Enable and start kubelet
systemctl enable --now kubelet

1.4.  Joining the master-2 node to the cluster with kubeadm

# 1. On another master, print a join command with a fresh token
kubeadm token create --print-join-command
Output: kubeadm join xxxxxx(cluster endpoint) --token xxxxxx(bootstrap secret) --discovery-token-ca-cert-hash sha256:xxxxxx(CA certificate hash)
# 2. On another master, upload the control-plane certificates and print the certificate key
kubeadm init phase upload-certs --upload-certs
Output: xxxxxx(certificate key)
# 3. On master-2, run the control-plane join command
kubeadm join xxxxxx(cluster endpoint) --token xxxxxx(bootstrap secret) --discovery-token-ca-cert-hash sha256:xxxxxx(CA certificate hash) --control-plane --certificate-key xxxxxx(certificate key)
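When scripting the join, the token and CA hash can be pulled out of the printed join command rather than copied by hand. A minimal sketch; the sample output line is illustrative:

```shell
# Sample output of `kubeadm token create --print-join-command` (illustrative values)
JOIN_CMD='kubeadm join 192.168.0.1:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:1234abcd'

# Extract the token and discovery hash for reuse in the control-plane join
TOKEN=$(echo "$JOIN_CMD" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
CA_HASH=$(echo "$JOIN_CMD" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$TOKEN hash=$CA_HASH"
# → token=abcdef.0123456789abcdef hash=sha256:1234abcd
```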

1.5. Manually converting the master-2 Kubernetes node into an OpenYurt cloud node

 1.  Copy /etc/kubernetes/manifests/yurt-hub.yaml from another master node to the same path on master-2.

 2.  Update the join-token in yurt-hub.yaml.

# First check with `kubeadm token list` whether the token in the file still exists; if not, create one with `kubeadm token create` and substitute it
command:
    - yurthub
    - --v=2
    - --server-addr=https://xxxxxx
    - --node-name=$(NODE_NAME)
    - --join-token=xxxxxx
    - --working-mode=cloud
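The token check described in the comment above can be scripted. A sketch, assuming the manifest sits at the standard path on master-2:

```shell
# Extract the join-token from the yurt-hub manifest and check whether
# it still appears in `kubeadm token list`; if not, a new token is needed.
YAML=/etc/kubernetes/manifests/yurt-hub.yaml
TOKEN=$(sed -n 's/.*--join-token=\(.*\)$/\1/p' "$YAML")
if kubeadm token list 2>/dev/null | grep -q "$TOKEN"; then
  echo "join-token is still valid"
else
  echo "join-token expired; run: kubeadm token create, then update $YAML"
fi
```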

 3.  Copy /var/lib/openyurt/kubelet.conf from another master node to the same path on master-2.

 4.  Edit /var/lib/kubelet/kubeadm-flags.env and append --kubeconfig=xxx --bootstrap-kubeconfig=xxx

KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=ehub.ctcdn.cn/cpdn/pause:3.2  --kubeconfig=/var/lib/openyurt/kubelet.conf  --bootstrap-kubeconfig= "
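The edit can also be done with sed rather than by hand; a sketch that inserts the two flags just before the closing quote of KUBELET_KUBEADM_ARGS (path assumed as above):

```shell
# Append --kubeconfig and --bootstrap-kubeconfig inside the closing quote
# of KUBELET_KUBEADM_ARGS. Run once; re-running would duplicate the flags.
ENV_FILE=/var/lib/kubelet/kubeadm-flags.env
sed -i 's|"$| --kubeconfig=/var/lib/openyurt/kubelet.conf --bootstrap-kubeconfig="|' "$ENV_FILE"
```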

 5.  Apply the OpenYurt cloud-node label to master-2

kubectl label node master-2  openyurt.io/is-edge-worker=false

 6.  Restart the kubelet service

systemctl restart kubelet

2.1. Problems encountered

Problem 1: running kubeadm join fails with: failed to get etcd status for https://192.168.0.3:2379: failed to dial endpoint https://192.168.0.3:2379 with maintenance client: context deadline exceeded

This is caused by a stale etcd member left over from when master-2 previously joined the cluster; the current master-2 node's IP is 192.168.0.5.

# Open a shell in the etcd pod on the master-1 node
kubectl exec -ti etcd-k8s-master01 -n kube-system sh
# List the etcd members
export ETCDCTL_API=3
etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" member list
# Find the member ID for 192.168.0.3 and remove it
etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" member remove 17826e460c060952   # the member ID for 192.168.0.3:2379
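The stale member ID can also be picked out of the `member list` output by IP instead of being read by eye. A sketch over the comma-separated output format of etcdctl v3; the sample lines are illustrative:

```shell
# `etcdctl member list` prints: ID, status, name, peer URLs, client URLs.
# Sample output (illustrative values):
MEMBER_LIST='17826e460c060952, started, master-2, https://192.168.0.3:2380, https://192.168.0.3:2379
a1b2c3d4e5f60718, started, master-1, https://192.168.0.2:2380, https://192.168.0.2:2379'

# Extract the ID of the stale member by its IP, for `etcdctl member remove`.
STALE_ID=$(echo "$MEMBER_LIST" | awk -F', ' '/192\.168\.0\.3:/ {print $1}')
echo "$STALE_ID"
# → 17826e460c060952
```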