1. Environment Setup
1.1 Installing kind
The local OS is CentOS. Install kind with the following commands:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind
After installation, run the following command to verify that kind was installed correctly:
kind version
kind v0.11.1 go1.16.4 linux/amd64
1.2 Primary Cluster Installation
kind-primary.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: istio-testing
networking:
  apiServerPort: 6443
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 6443
    hostPort: 45876
    listenAddress: "0.0.0.0"
    protocol: tcp
- role: worker
This config exposes the API server's port 6443 on host port 45876 so that external tooling can reach it. Create the primary cluster:
kind create cluster --config=./kind-primary.yaml
List the clusters:
kind get clusters
1.3 Remote Cluster Installation
kind-remote.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: istio-remote
networking:
  apiServerAddress: 127.0.0.2
  apiServerPort: 6443
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 6443
    hostPort: 45877
    listenAddress: "0.0.0.0"
    protocol: tcp
- role: worker
This config exposes the API server's port 6443 on host port 45877. Since 127.0.0.1 is already taken by the primary cluster, apiServerAddress is redefined to 127.0.0.2 so both API servers remain reachable for external integration. Create the remote cluster:
kind create cluster --config=./kind-remote.yaml
List the clusters:
kind get clusters
istio-remote
istio-testing
1.4 Installing a LoadBalancer
1.4.1 Installing MetalLB on the Primary Cluster
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
MetalLB must be given a range of IP addresses that it controls. We want this range to be on the docker network that kind uses:
docker network inspect -f '{{.IPAM.Config}}' kind
[{172.17.0.0/16 172.17.0.1 map[]} {fc00:f853:ccd:e793::/64 fc00:f853:ccd:e793::1 map[]}]
The output contains a CIDR such as 172.17.0.0/16. We want the load-balancer IP range to come from this subnet. For example, we can configure MetalLB to use 172.17.255.200 through 172.17.255.250 by creating an IPAddressPool and the associated L2Advertisement:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.17.255.200-172.17.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
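The pool range above can be derived mechanically from the CIDR that `docker network inspect` reports. A minimal sketch, assuming a /16 subnet; the SUBNET value is hard-coded to the sample CIDR from this section so the derivation can be shown standalone — substitute the CIDR your docker network actually reports:

```shell
# Derive a MetalLB L2 address pool from the kind docker network CIDR.
# SUBNET is the sample value from `docker network inspect` above.
SUBNET="172.17.0.0/16"
PREFIX=${SUBNET%.*.*}                       # strip ".0.0/16" -> "172.17"
POOL="${PREFIX}.255.200-${PREFIX}.255.250"  # high range, unlikely to collide with node IPs
echo "$POOL"
```

Picking addresses near the top of the subnet keeps the pool clear of the IPs docker assigns to the kind node containers.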
Verify the deployment status:
kubectl get po -n metallb-system
NAME READY STATUS RESTARTS AGE
controller-54b4fd6944-zck69 1/1 Running 0 34d
speaker-2b94x 1/1 Running 0 34d
speaker-zd2hh 1/1 Running 0 34d
1.5 Installing istioctl
Download the matching istioctl release from https://github.com/istio/istio/releases; this guide uses version 1.16.2.
tar -zxvf istio-1.16.2-linux-amd64.tar.gz
To make istioctl convenient to use, place the istio directory somewhere suitable and add it to your PATH. For example, move the directory under /usr/local/ and add an environment variable in /etc/profile:
mv istio-1.16.2 /usr/local/istio
Append the following to /etc/profile:
export PATH=$PATH:/usr/local/istio/bin
Save the file, then run source /etc/profile.
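After sourcing the profile, it is worth sanity-checking that the bin directory actually landed on PATH before relying on `istioctl` resolving. A small sketch, assuming the archive was moved to /usr/local/istio:

```shell
# Append the istio bin directory to PATH (as done in /etc/profile)
export PATH=$PATH:/usr/local/istio/bin
# Verify the entry is present; `command -v istioctl` would be the next check
case ":$PATH:" in
  *":/usr/local/istio/bin:"*) PATH_OK=yes ;;
  *) PATH_OK=no ;;
esac
echo "$PATH_OK"
```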
Check the istio environment and version information:
istioctl version
client version: 1.16.2
control plane version: 1.16.2
data plane version: 1.16.2 (4 proxies)
2. Primary Cluster Setup
2.1 Installing the Primary Control Plane
View the kube config for both clusters:
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.2:6443
  name: kind-istio-remote
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://127.0.0.1:6443
  name: kind-istio-testing
contexts:
- context:
    cluster: kind-istio-remote
    user: kind-istio-remote
  name: kind-istio-remote
- context:
    cluster: kind-istio-testing
    user: kind-istio-testing
  name: kind-istio-testing
current-context: kind-istio-testing
kind: Config
preferences: {}
users:
- name: kind-istio-remote
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: kind-istio-testing
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
Create the Istio configuration file for cluster1 (the primary, context kind-istio-testing):
cat <<EOF > cluster1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-istio-testing
      network: network1
EOF
Apply the configuration to cluster1:
istioctl install --set values.pilot.env.EXTERNAL_ISTIOD=true --context="kind-istio-testing" -f cluster1.yaml
When values.pilot.env.EXTERNAL_ISTIOD is set to true, the control plane installed on cluster1 can also serve as an external control plane for other remote clusters. With this feature enabled, istiod will attempt to acquire the leadership lock and, consequently, manage remote clusters that are attached to it with the appropriate annotation (cluster2 in this example).
2.2 Installing the East-West Gateway on the Primary Cluster
Install a gateway in the primary cluster that is dedicated to east-west traffic. Its type is LoadBalancer, and its External-IP will later serve as the remote cluster's remotePilotAddress.
First, label the istio-system namespace for injection:
kubectl label namespace istio-system istio-injection=enabled
gen-eastwest-gateway.sh
#!/bin/bash
#
# Copyright Istio Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -euo pipefail

SINGLE_CLUSTER=0
REVISION=""
while (( "$#" )); do
  case "$1" in
    --single-cluster)
      SINGLE_CLUSTER=1
      shift
    ;;
    --cluster)
      # No longer does anything, but keep it around to avoid breaking users
      shift 2
    ;;
    --network)
      NETWORK=$2
      shift 2
    ;;
    --mesh)
      # No longer does anything, but keep it around to avoid breaking users
      shift 2
    ;;
    --revision)
      REVISION=$2
      shift 2
    ;;
    -*)
      echo "Error: Unsupported flag $1" >&2
      exit 1
    ;;
  esac
done

# single-cluster installations may need this gateway to allow VMs to get discovery
# for non-single cluster, we add additional topology information
SINGLE_CLUSTER="${SINGLE_CLUSTER:-0}"
if [[ "${SINGLE_CLUSTER}" -eq 0 ]]; then
  if [[ -z "${NETWORK:-}" ]]; then
    echo "Must specify either --single-cluster or --network."
    exit 1
  fi
fi

# base
IOP=$(cat <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: eastwest
spec:
  revision: "${REVISION}"
  profile: empty
  components:
    ingressGateways:
      - name: istio-eastwestgateway
        label:
          istio: eastwestgateway
          app: istio-eastwestgateway
EOF
)

# mark this as a multi-network gateway
if [[ "${SINGLE_CLUSTER}" -eq 0 ]]; then
  IOP=$(cat <<EOF
$IOP
          topology.istio.io/network: $NETWORK
EOF
)
fi

# env
IOP=$(cat <<EOF
$IOP
        enabled: true
        k8s:
EOF
)
if [[ "${SINGLE_CLUSTER}" -eq 0 ]]; then
  IOP=$(cat <<EOF
$IOP
          env:
            # traffic through this gateway should be routed inside the network
            - name: ISTIO_META_REQUESTED_NETWORK_VIEW
              value: ${NETWORK}
EOF
)
fi

# Ports
IOP=$(cat <<EOF
$IOP
          service:
            ports:
              - name: status-port
                port: 15021
                targetPort: 15021
              - name: tls
                port: 15443
                targetPort: 15443
              - name: tls-istiod
                port: 15012
                targetPort: 15012
              - name: tls-webhook
                port: 15017
                targetPort: 15017
EOF
)

# Gateway injection template
IOP=$(cat <<EOF
$IOP
  values:
    gateways:
      istio-ingressgateway:
        injectionTemplate: gateway
EOF
)

# additional multicluster/multinetwork meta
if [[ "${SINGLE_CLUSTER}" -eq 0 ]]; then
  IOP=$(cat <<EOF
$IOP
    global:
      network: ${NETWORK}
EOF
)
fi

echo "$IOP"
Run the script to install the east-west gateway. Note that it must be installed into the primary cluster (context kind-istio-testing), since its External-IP is what the remote cluster will dial:
./gen-eastwest-gateway.sh \
--mesh mesh1 --cluster kind-istio-testing --network network1 | \
istioctl --context="kind-istio-testing" install -y -f -
Get the gateway's external address:
kubectl get svc -n istio-system istio-eastwestgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-eastwestgateway LoadBalancer 11.98.185.93 172.17.255.201 15021:30123/TCP,15443:31529/TCP,15012:30861/TCP,15017:30089/TCP 20d
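The EXTERNAL-IP column is the value that section 3.1 later plugs in as remotePilotAddress. A hedged sketch of extracting it with awk; the row below hard-codes the sample output from this section so the parsing can run standalone — against a live cluster you would pipe `kubectl get svc -n istio-system istio-eastwestgateway --no-headers` into the same awk:

```shell
# Sample output row (hard-coded for illustration); columns are
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ROW='istio-eastwestgateway   LoadBalancer   11.98.185.93   172.17.255.201   15021:30123/TCP,15443:31529/TCP,15012:30861/TCP,15017:30089/TCP   20d'
# Column 4 is EXTERNAL-IP; this becomes the remote cluster's remotePilotAddress
DISCOVERY_ADDRESS=$(echo "$ROW" | awk '{print $4}')
echo "$DISCOVERY_ADDRESS"
```

On a live cluster, `kubectl get svc -n istio-system istio-eastwestgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` retrieves the same field without text parsing.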
2.3 Exposing the Primary Control Plane
expose-istiod.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istiod-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
  - port:
      name: tls-istiod
      number: 15012
      protocol: tls
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
  - port:
      name: tls-istiodwebhook
      number: 15017
      protocol: tls
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiod-vs
spec:
  hosts:
  - "*"
  gateways:
  - istiod-gateway
  tls:
  - match:
    - port: 15012
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 15012
  - match:
    - port: 15017
      sniHosts:
      - "*"
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 443
Install the control-plane routing rules in the istio-system namespace:
kubectl apply --context="kind-istio-testing" -n istio-system -f \
./expose-istiod.yaml
3. Remote Cluster Setup
3.1 Installing Istio on the Remote Cluster
kubectl --context="kind-istio-remote" create namespace istio-system
kubectl --context="kind-istio-remote" annotate namespace istio-system topology.istio.io/controlPlaneClusters=kind-istio-testing
Create a configuration for the remote cluster:
cat <<EOF > cluster2.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: remote
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: kind-istio-remote
      network: network1
      remotePilotAddress: 172.17.255.201
EOF
Here remotePilotAddress is the External-IP of the primary cluster's east-west gateway. Apply this configuration to the remote cluster:
istioctl install --context="kind-istio-remote" -f cluster2.yaml
Check the installation status:
kubectl get svc -n istio-system
kubectl get endpoints -n istio-system
3.2 Attaching the Remote Cluster to the Primary
To connect the remote cluster to its control plane, we give the control plane in cluster1 access to the API server in cluster2. This does the following:
- Enables the control plane to authenticate connection requests from workloads running in cluster2. Without API server access, the control plane would reject those requests.
- Enables discovery of service endpoints running in cluster2.
Because cluster1 is included in the topology.istio.io/controlPlaneClusters namespace annotation, the control plane on cluster1 will also:
- Patch the certificates in cluster2's webhooks.
- Start the namespace controller, which writes ConfigMaps into namespaces in cluster2.
To give the control plane API server access to cluster2, we generate a remote secret and apply it to cluster1. Run the following against the primary cluster:
istioctl x create-remote-secret \
--context="kind-istio-remote" \
--name=kind-istio-remote | \
kubectl apply -f - --context="kind-istio-testing"
Check the installation status:
kubectl get secrets -n istio-system istio-remote-secret-kind-istio-remote
Decode the remote cluster's kube config stored in the secret:
kubectl get secrets -n istio-system istio-remote-secret-kind-istio-remote -o jsonpath='{.data.kind-istio-remote}' | base64 -d
Check the istiod logs on the primary cluster:
kubectl logs -n istio-system -l app=istiod --tail 100000 | grep -i remote
If the output looks like the following, the attachment succeeded:
2023-05-25T06:34:26.024890Z info finished callback for cluster and starting to sync cluster=kind-istio-remote secret=istio-system/istio-remote-secret-kind-istio-remote
2023-05-25T06:34:26.024916Z info Number of remote clusters: 1
2023-05-25T06:34:26.031733Z info kube starting namespace controller for cluster kind-istio-remote
2023-05-25T06:34:26.037898Z info kube kube controller for kind-istio-remote synced after 15.314602ms
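The "Number of remote clusters" line is the easiest one to check mechanically. A small sketch that parses the count out of such a log line; it is hard-coded to the sample output above so it can run standalone — on a live cluster you would grep the istiod logs as shown instead:

```shell
# Sample istiod log line, copied from the output above
LOG='2023-05-25T06:34:26.024916Z info Number of remote clusters: 1'
# Strip everything up to the final ": " to get the trailing count
COUNT=${LOG##*: }
if [ "$COUNT" -ge 1 ]; then
  echo "remote cluster registered"
fi
```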
If the connection is refused, the most likely cause is a wrong API server address:
in the istio-remote-secret-kind-istio-remote secret on the primary cluster, the API server address must not be 127.0.0.1:6443; it has to be the container IP of the remote cluster's control plane. Look up that IP with:
docker ps
docker inspect {{containerId}}
# or
kubectl get po -n kube-system kube-apiserver-istio-remote-control-plane -owide
Then update the API server address in the istio-remote-secret-kind-istio-remote secret, for example by regenerating it with istioctl's --server flag (substitute the control-plane IP you found):
istioctl x create-remote-secret \
--context="kind-istio-remote" \
--name=kind-istio-remote \
--server="https://<control-plane-ip>:6443" | \
kubectl apply -f - --context="kind-istio-testing"
3.3 Verifying the Remote Cluster Attachment
Create a sample namespace in the remote cluster:
kubectl create ns sample
Check its ConfigMaps:
kubectl get cm -n sample
NAME DATA AGE
istio-ca-root-cert 1 20d
kube-root-ca.crt 1 20d
If the istio-ca-root-cert ConfigMap is present, the remote cluster has been successfully attached to the primary, and you can now install test services.