
KubeVirt installation

2024-04-02 05:48:31

1. Kernel version upgrade (for this install; 10.0 did not seem to work)

Note: if you are testing inside VMware virtual machines, the virtualization engine options "Virtualize Intel VT-x/EPT or AMD-V/RVI" and "Virtualize IOMMU (IO memory management unit)" must both be checked.

 

1.1 Check the current kernel

uname -r

1.2 Import the ELRepo public key and yum repository

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

 

1.3 List the available kernel versions

yum --disablerepo="*" --enablerepo="elrepo-kernel" list available

1.4 Install the latest stable (long-term, kernel-lt) version

yum -y --enablerepo=elrepo-kernel install kernel-lt

1.5 Inspect the grub2 menu entries

sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
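The awk one-liner above prints each grub menu entry with its index. A self-contained sketch of what it does, run against a made-up two-entry grub.cfg excerpt (the kernel names are hypothetical):

```shell
# Fake grub.cfg excerpt (hypothetical entry names) to show what the awk prints
g=$(mktemp)
cat > "$g" <<'EOF'
menuentry 'CentOS Linux (5.4.270-1.el7.elrepo.x86_64) 7 (Core)' --class centos {
menuentry 'CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)' --class centos {
EOF
# Split each line on single quotes; $2 is then the entry title, i counts from 0
awk -F\' '$1=="menuentry " {print i++ " : " $2}' "$g"
# → 0 : CentOS Linux (5.4.270-1.el7.elrepo.x86_64) 7 (Core)
#   1 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
```

The printed index (here 0 for the new kernel) is the argument to pass to grub2-set-default in the next step.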

1.6 Set the default boot entry

Method 1: by command

grub2-set-default 0

Method 2: edit the grub defaults file

vim /etc/default/grub

Set GRUB_DEFAULT to 0 (or leave it as saved and rely on the grub2-set-default command above):

GRUB_DEFAULT=saved
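The vim edit above can also be scripted. A sketch on a throwaway copy (the file contents are a hypothetical minimal /etc/default/grub; adapt the path for a real system):

```shell
f=$(mktemp)   # stand-in for /etc/default/grub in this sketch
cat > "$f" <<'EOF'
GRUB_TIMEOUT=5
GRUB_DEFAULT=saved
EOF
# Point grub directly at menu entry 0
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=0/' "$f"
grep '^GRUB_DEFAULT=' "$f"   # → GRUB_DEFAULT=0
```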

 

Regenerate the grub configuration and reboot

grub2-mkconfig -o /boot/grub2/grub.cfg

reboot

 

2. Install Docker (this version combination of Docker + k8s + KubeVirt installed successfully)

2.1 Remove old Docker packages (optional)

sudo yum remove docker \

                  docker-client \

                  docker-client-latest \

                  docker-common \

                  docker-latest \

                  docker-latest-logrotate \

                  docker-logrotate \

                  docker-engine

2.2 Configure the yum repository

sudo yum install -y yum-utils

sudo yum-config-manager \

--add-repo \

https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

 

2.3 Install

# The versions below are the ones used when installing k8s

yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7  containerd.io-1.4.6

2.4 Configure a domestic mirror for faster image pulls

This also adds Docker's production-relevant cgroup setting (native.cgroupdriver=systemd).

sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<-'EOF'

{

  "registry-mirrors": ["https://82m9ar63.mirror.aliyuncs.com"],

  "exec-opts": ["native.cgroupdriver=systemd"],

  "log-driver": "json-file",

  "log-opts": {

    "max-size": "100m"

  },

  "storage-driver": "overlay2"

}

EOF

sudo systemctl daemon-reload

sudo systemctl restart docker

 

3. Install k8s

3.1 Base environment (all machines)

Run the following on every machine, master and node alike.

# Give each machine its own hostname, to tell the master apart from the node machines

hostnamectl set-hostname xxxx

 

# Set SELinux to permissive mode (effectively disabling it)

sudo setenforce 0

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

 

# Turn off swap

swapoff -a  

sed -ri 's/.*swap.*/#&/' /etc/fstab
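The sed above comments out every fstab line that mentions swap, so swap stays off after a reboot. What it does, demonstrated on a throwaway copy with made-up entries:

```shell
f2=$(mktemp)   # stand-in for /etc/fstab (hypothetical entries)
cat > "$f2" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# '#&' prefixes each matching line with '#', keeping the original text
sed -ri 's/.*swap.*/#&/' "$f2"
cat "$f2"
# the root line is untouched; the swap line is now commented out
```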

 

# Let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf

br_netfilter

EOF

 

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sudo sysctl --system

 

3.2 Install kubelet, kubeadm, kubectl (all machines)

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

  https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

exclude=kubelet kubeadm kubectl

EOF

 

sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

 

3.3 Pull the required images (matching the k8s version) (all machines)

sudo tee ./images.sh <<-'EOF'

#!/bin/bash

images=(

kube-apiserver:v1.20.9

kube-proxy:v1.20.9

kube-controller-manager:v1.20.9

kube-scheduler:v1.20.9

coredns:1.7.0

etcd:3.4.13-0

pause:3.2

)

for imageName in ${images[@]} ; do

docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName

done

EOF

   

chmod +x ./images.sh && ./images.sh
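A dry-run sketch of what the loop in images.sh expands to, echoing the pull commands instead of invoking docker (only two of the images are listed here for brevity):

```shell
# Same loop shape as images.sh, but echo instead of docker pull
images=(kube-apiserver:v1.20.9 kube-proxy:v1.20.9)
for imageName in "${images[@]}"; do
  echo docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/$imageName
done
# → docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-apiserver:v1.20.9
#   docker pull registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/kube-proxy:v1.20.9
```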

 

3.4 Map the master domain name (all machines)

# On every machine, map the master's IP to a name; change 172.31.0.4 to your master node's IP. cluster-endpoint is the mapped name.

echo "172.31.0.4  cluster-endpoint" >> /etc/hosts
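One caveat with `>>`: re-running the command appends a duplicate line. A guarded variant, sketched against a temp file standing in for the real /etc/hosts:

```shell
hosts=$(mktemp)   # stand-in for /etc/hosts in this sketch
entry="172.31.0.4  cluster-endpoint"
# Only append if the exact entry is not already present
grep -qF "$entry" "$hosts" || echo "$entry" >> "$hosts"
grep -qF "$entry" "$hosts" || echo "$entry" >> "$hosts"   # second run is a no-op
grep -c 'cluster-endpoint' "$hosts"   # → 1
```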

 

# All machines must be able to ping each other

vim /etc/hosts

Edit it, for example:

192.168.42.131  k8s-master

192.168.42.132  k8s-node01

192.168.42.131  cluster-endpoint

 

3.5 Initialize the master node

Note:

# Master node initialization: 172.31.0.4 is the master's IP, v1.20.9 the k8s version, 10.96.0.0/16 the range for k8s Services, and 10.97.0.0/16 the range for pods (and hence VMs); none of these networks may share the same /16 prefix.

# Make sure none of the network ranges overlap

kubeadm init \

--apiserver-advertise-address=172.31.0.4 \

--control-plane-endpoint=cluster-endpoint \

--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \

--kubernetes-version v1.20.9 \

--service-cidr=10.96.0.0/16 \

--pod-network-cidr=10.97.0.0/16

# The 10.97.0.0/16 in the calico.yaml below must then be changed to match this value
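Since both ranges above are /16s, non-overlap simply means their first two octets differ. A quick self-check sketch (assumption: both masks are /16, as in the kubeadm init above):

```shell
svc_cidr=10.96.0.0/16
pod_cidr=10.97.0.0/16
# For /16 ranges, comparing the first two octets is enough
svc16=$(echo "$svc_cidr" | cut -d. -f1-2)
pod16=$(echo "$pod_cidr" | cut -d. -f1-2)
if [ "$svc16" = "$pod16" ]; then
  echo "overlap: choose different /16 ranges"
else
  echo "ok: service and pod CIDRs are disjoint"
fi
# → ok: service and pod CIDRs are disjoint
```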

 

3.6 After a while, output like the following appears

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Alternatively, if you are the root user, you can run:

 

  export KUBECONFIG=/etc/kubernetes/admin.conf

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

 

  kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \

    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3 \

    --control-plane

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \

    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

 

Note: copy this join command out and keep it for later

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \

    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

 

 

3.7 Run the commands shown above (on the master node)

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

3.8 Test

kubectl get nodes

At this point only the master node is listed, and it is NotReady: a network component (calico or flannel) still has to be installed. calico is used here.

 

3.9 Install the calico network component (version v3.20)

Official documentation:

https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes

curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O

Edit calico.yaml: uncomment the lines below and change the value to the pod network range configured earlier (the 10.97.0.0/16 from --pod-network-cidr=10.97.0.0/16).

cat calico.yaml | grep 192.168

vi calico.yaml

# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"

Change to:

- name: CALICO_IPV4POOL_CIDR
  value: "10.97.0.0/16"
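The uncomment-and-edit can also be scripted with sed. A sketch against a two-line excerpt standing in for the real calico.yaml (the indentation is illustrative):

```shell
y=$(mktemp)   # excerpt of calico.yaml, not the real file
cat > "$y" <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Drop the comment markers and swap in the pod CIDR from kubeadm init
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.97.0.0/16"|' "$y"
cat "$y"
```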

 

# Apply the yaml

kubectl apply -f calico.yaml

 

Once it succeeds, the output looks as follows: pods are generated automatically, and a destroyed pod is automatically replaced with a new one.

 

 

3.10 Add worker nodes

Use the join command saved when the master was initialized; the token is valid for 24 hours.

kubeadm join cluster-endpoint:6443 --token hums8f.vyx71prsg74ofce7 \

    --discovery-token-ca-cert-hash sha256:a394d059dd51d68bb007a532a037d0a477131480ae95f75840c461e85e2c6ae3

Running this command on a worker node machine joins that machine to the master.

 

If the token has expired or was lost, regenerate the join command on the master:

kubeadm token create --print-join-command

 

Checking the nodes on the master again now lists all of them: the master and worker nodes are configured.

 

 

 

4. KubeVirt installation

4.1 Check that the machines support virtualization, then install the tooling (all machines)

# Check for the vmx/svm CPU flags

cat /proc/cpuinfo | grep -E 'vmx|svm'
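`grep -E 'vmx|svm'` matches either the Intel (vmx) or AMD (svm) hardware-virtualization flag; empty output means virtualization is unavailable or not exposed to the VM. Illustrated on a made-up cpuinfo flags line:

```shell
# Hypothetical flags line from an Intel CPU with VT-x exposed
sample='flags : fpu vme de pse msr pae vmx ssse3 sse4_1'
echo "$sample" | grep -Ec 'vmx|svm'   # → 1 (a non-zero count means OK)
```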

 

Install qemu, kvm and libvirt:

yum install -y qemu-kvm libvirt virt-install bridge-utils

Start the service:

systemctl start libvirtd

systemctl enable libvirtd

Check that the KVM modules are loaded:

lsmod | grep kvm

 

# Verify that the host can run virtualization

virt-host-validate qemu

 

4.2 Install KubeVirt (master node, version v0.49.0)

export KUBEVIRT_VERSION=v0.49.0

kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml

kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml

 

# Watch the rollout (it can take a while; a reboot sometimes helps)

kubectl get pods -n kubevirt

 

 

4.3 Install CDI, the VM storage add-on (version v1.53.0)

export VERSION=v1.53.0

kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml

kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml

# When done, run kubectl get pod -A and check
# that the CDI pods have been created

 

4.4 Install virtctl, the VM management tool

wget -O virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64

chmod +x virtctl

# Note: virtctl is a client-side binary; unlike KubeVirt and CDI it does not create any pods

 

4.5 Build a VM

# Download a sample vm.yaml

curl https://kubevirt.io/labs/manifests/vm.yaml -O

# Or write it yourself; the yaml content is as follows (careful: when copy-pasting, the leading "a" of apiVersion sometimes gets dropped)

apiVersion: kubevirt.io/v1

kind: VirtualMachine

metadata:

  name: testvm

spec:

  running: false

  template:

    metadata:

      labels:

        kubevirt.io/size: small

        kubevirt.io/domain: testvm

    spec:

      domain:

        devices:

          disks:

            - name: containerdisk

              disk:

                bus: virtio

            - name: cloudinitdisk

              disk:

                bus: virtio

          interfaces:

          - name: default

            masquerade: {}

        resources:

          requests:

            memory: 64M

      networks:

      - name: default

        pod: {}

      volumes:

        - name: containerdisk

          containerDisk:

            image: quay.io/kubevirt/cirros-container-disk-demo

        - name: cloudinitdisk

          cloudInitNoCloud:

            userDataBase64: SGkuXG4=
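The `userDataBase64` field above is just the cloud-init user data, base64-encoded. The demo value decodes to the five literal characters `Hi.\n`; a real manifest would encode an actual cloud-config instead:

```shell
# Reproduce the demo value ('Hi.\n' here is five literal characters)
printf '%s' 'Hi.\n' | base64          # → SGkuXG4=
# Decode to confirm
echo 'SGkuXG4=' | base64 -d; echo     # → Hi.\n
# Encoding a real cloud-init file works the same way (hypothetical file name):
# base64 -w0 my-cloud-init.yaml
```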

 

# Create the VM from vm.yaml

kubectl apply -f vm.yaml

# Or apply it straight from the URL

kubectl apply -f https://kubevirt.io/labs/manifests/vm.yaml

 

# List VMs

kubectl get vms

# Start the VM instance

./virtctl start testvm

# List running VM instances (vmi)

kubectl get vmis

 


 
