Ceph-CSI RBD Volumes and Clone Volumes: Creation and Deletion Flows

2023-05-18 10:15:23

by Wang Wei, Elastic Storage, Cloud-Network Product Division

Ceph-CSI creates volumes in three ways: creating a brand-new volume, cloning from a parent volume, and creating a volume from a snapshot. Let's start with the creation flow of a regular volume.

Creating a regular volume

The creation flow, as pieced together from the code, is:

  1. Query csi.volumes.default to check whether the volume already exists.
  2. If it does not exist, generate a uuid and write the key-value pair "csi.volume."+pvName: uuid into csi.volumes.default.
  3. Create the metadata object and write the metadata, which includes csi.imagename (the image name in Ceph, in the format "csi-vol-"+uuid) and csi.volname (the pvName).
  4. Create the volume, then write csi.imageid (the Ceph image id) into the omap based on the volume's information.
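The bookkeeping in the steps above can be sketched in a few lines. This is a minimal illustration, not ceph-csi source: plain dicts stand in for the RADOS omaps, and `create_volume` and the `"<image id>"` placeholder are hypothetical.

```python
import uuid as uuidlib

csi_volumes_default = {}   # stands in for the csi.volumes.default omap
metadata_objects = {}      # stands in for the per-volume csi.volume.<uuid> objects

def create_volume(pv_name: str) -> str:
    # Step 1: check whether this PV was already provisioned.
    key = "csi.volume." + pv_name
    if key in csi_volumes_default:
        return "csi-vol-" + csi_volumes_default[key]
    # Step 2: generate a uuid and record the pvName -> uuid mapping.
    vol_uuid = str(uuidlib.uuid1())
    csi_volumes_default[key] = vol_uuid
    # Step 3: create the metadata object with csi.imagename / csi.volname.
    image_name = "csi-vol-" + vol_uuid
    metadata_objects["csi.volume." + vol_uuid] = {
        "csi.imagename": image_name,
        "csi.volname": pv_name,
    }
    # Step 4: create the rbd image, then record its id as csi.imageid.
    metadata_objects["csi.volume." + vol_uuid]["csi.imageid"] = "<image id>"
    return image_name
```

Because step 1 consults csi.volumes.default first, a repeated request for the same pvName returns the existing image instead of provisioning twice.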

- First, look at the PV and PVC in Kubernetes


[root@node1 vdisk]# kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
cstor-pvc-vdisk         Bound    pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223   1Gi        RWO            cstor-csi-vdisk-sc   4m56s

[root@node1 vdisk]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS         REASON   AGE
pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223   1Gi        RWO            Delete           Bound    default/cstor-pvc-vdisk         cstor-csi-vdisk-sc            5m21s

- Inspect the PV details; the last 36 characters of VolumeHandle are the generated uuid


[root@node1 vdisk]# kubectl describe pv pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223
Name:            pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: vdisk.csi.cstor.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    cstor-csi-vdisk-sc
Status:          Bound
Claim:           default/cstor-pvc-vdisk
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            vdisk.csi.cstor.com
    FSType:            ext4
    VolumeHandle:      0101-24-4a9e463a-4853-4237-a5c5-9ae9d25bacda-00000001-768a1126-f064-11ec-9a2d-36614e068d81
    ReadOnly:          false
    VolumeAttributes:      clusterID=4a9e463a-4853-4237-a5c5-9ae9d25bacda
                           driverType=vdisk.csi.cstor.com
                           imageName=csi-vol-768a1126-f064-11ec-9a2d-36614e068d81
                           journalPool=rbd
                           pool=rbd
                           radosNamespace=
                           storage.kubernetes.io/csiProvisionerIdentity=1655707323013-8081-csi.cstor.com
Events:                <none>

 

From the composition of VolumeHandle, we can tell that 768a1126-f064-11ec-9a2d-36614e068d81 is the volume's uuid in Ceph.
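Splitting the handle shown above can be done mechanically. The sketch below is an assumption based purely on the layout visible in this PV (`<version>-<idlen>-<clusterID>-<poolID>-<uuid>`); `parse_volume_handle` is a hypothetical helper, not a ceph-csi API.

```python
def parse_volume_handle(handle: str) -> dict:
    """Split a VolumeHandle of the shape seen above into its parts."""
    uuid = handle[-36:]            # trailing 36 chars: the image uuid
    rest = handle[:-37]            # drop "-<uuid>"
    pool_id = rest[-8:]            # 8-hex-digit pool id, e.g. 00000001
    cluster_id = rest[:-9][-36:]   # 36-char cluster id before the pool id
    return {"cluster_id": cluster_id, "pool_id": pool_id, "uuid": uuid,
            "image_name": "csi-vol-" + uuid}

handle = ("0101-24-4a9e463a-4853-4237-a5c5-9ae9d25bacda-"
          "00000001-768a1126-f064-11ec-9a2d-36614e068d81")
```

Applied to this handle, the parser recovers the clusterID from VolumeAttributes and the image name `csi-vol-768a1126-f064-11ec-9a2d-36614e068d81`.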

- Query the corresponding pool: the uuid leads us to the image, where we can also read the image id


(ceph-mon)[root@node2 /]# rbd ls rbd
csi-vol-768a1126-f064-11ec-9a2d-36614e068d81

(ceph-mon)[root@node2 /]# rbd info rbd/csi-vol-768a1126-f064-11ec-9a2d-36614e068d81
rbd image 'csi-vol-768a1126-f064-11ec-9a2d-36614e068d81':
 size 1 GiB in 256 objects
 order 22 (4 MiB objects)
 snapshot_count: 1
 id: 2aa2c1b62ed876
 block_name_prefix: rbd_data.2aa2c1b62ed876
 format: 2
 features: layering, operations
 op_features: clone-parent, snap-trash
 flags:
 create_timestamp: Mon Jun 20 14:44:45 2022
 access_timestamp: Mon Jun 20 14:44:45 2022
 modify_timestamp: Mon Jun 20 14:44:45 2022

 

- Use the rados command to list the CSI metadata objects


(ceph-mon)[root@node2 /]# rados -p rbd ls | grep ^csi
csi.volume.768a1126-f064-11ec-9a2d-36614e068d81
csi.volumes.default


- csi.volumes.default records the creation info for every PVC, chiefly the PVC-to-uuid mapping; during creation this record is checked to decide whether the volume has already been provisioned. The entry for our PVC:


csi.volume.pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223
value (36 bytes) :
00000000  37 36 38 61 31 31 32 36  2d 66 30 36 34 2d 31 31  |768a1126-f064-11|
00000010  65 63 2d 39 61 32 64 2d  33 36 36 31 34 65 30 36  |ec-9a2d-36614e06|
00000020  38 64 38 31                                       |8d81|
00000024

 

- csi.volume.768a1126-f064-11ec-9a2d-36614e068d81 is the metadata object for the volume
 


(ceph-mon)[root@node2 /]#  rados  -p rbd listomapvals csi.volume.768a1126-f064-11ec-9a2d-36614e068d81
csi.imageid
value (14 bytes) :
00000000  32 61 61 32 63 31 62 36  32 65 64 38 37 36        |2aa2c1b62ed876|
0000000e

csi.imagename
value (44 bytes) :
00000000  63 73 69 2d 76 6f 6c 2d  37 36 38 61 31 31 32 36  |csi-vol-768a1126|
00000010  2d 66 30 36 34 2d 31 31  65 63 2d 39 61 32 64 2d  |-f064-11ec-9a2d-|
00000020  33 36 36 31 34 65 30 36  38 64 38 31              |36614e068d81|
0000002c

csi.volname
value (40 bytes) :
00000000  70 76 63 2d 61 63 63 62  66 64 31 31 2d 38 62 31  |pvc-accbfd11-8b1|
00000010  33 2d 34 61 38 61 2d 39  36 61 61 2d 63 64 64 38  |3-4a8a-96aa-cdd8|
00000020  36 62 36 39 39 32 32 33                           |6b699223|
00000028

 
- The CSI logs let us trace the creation process

 
got omap values: (pool="rbd", namespace="", name="csi.volumes.default"): map[]
set omap keys (pool="rbd", namespace="", name="csi.volumes.default"):
map[csi.volume.pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223:768a1126-f064-11ec-9a2d-36614e068d81])
set omap keys (pool="rbd", namespace="", name="csi.volume.768a1126-f064-11ec-9a2d-36614e068d81"):
 map[csi.imagename:csi-vol-768a1126-f064-11ec-9a2d-36614e068d81 csi.volname:pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223])
generated Volume ID (0101-24-4a9e463a-4853-4237-a5c5-9ae9d25bacda-00000001-768a1126-f064-11ec-9a2d-36614e068d81)
and image name (csi-vol-768a1126-f064-11ec-9a2d-36614e068d81) for request name (pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223)
rbd: create rbd/csi-vol-768a1126-f064-11ec-9a2d-36614e068d81 size 1024M (features: [layering]) using mon xxx
created volume pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223 backed by image csi-vol-768a1126-f064-11ec-9a2d-36614e068d81
set omap keys (pool="rbd", namespace="", name="csi.volume.768a1126-f064-11ec-9a2d-36614e068d81"): map[csi.imageid:2aa2c1b62ed876])
 


Creating a clone volume

- Creation flow

Precondition: a parent volume, e.g. csi-vol-768a1126-f064-11ec-9a2d-36614e068d81

  1.  Query csi.volumes.default to check whether the volume already exists.
  2.  If it does not exist, generate a uuid and write the key-value pair "csi.volume."+pvName: uuid into csi.volumes.default.
  3.  Create the metadata object and write the metadata: csi.imagename (the image name in Ceph, in the format "csi-vol-"+uuid) and csi.volname (the pvName).
  4.  First create a snapshot of the parent volume csi-vol-768a1126-f064-11ec-9a2d-36614e068d81; the snapshot is named after the new volume, ("csi-vol-"+uuid)+"-temp".
  5.  Clone a temporary volume from that snapshot; the temporary volume is also named ("csi-vol-"+uuid)+"-temp".
  6.  Delete the parent volume's snapshot ("csi-vol-"+uuid)+"-temp".
  7.  Create a snapshot of the temporary volume ("csi-vol-"+uuid)+"-temp"; the snapshot is named after the new volume, ("csi-vol-"+uuid).
  8.  Clone the final volume ("csi-vol-"+uuid) from that snapshot.
  9.  Delete the snapshot from the previous step ("csi-vol-"+uuid).
  10. Query the volume's information and write csi.imageid (the Ceph image id) into the omap.
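Steps 4-9 boil down to a fixed sequence of rbd operations. The helper below is a sketch that spells that sequence out for a given parent image and new uuid; `clone_operations` is a hypothetical name, and the command strings mirror the shape of the log further down rather than ceph-csi's exact invocations.

```python
def clone_operations(pool: str, parent: str, new_uuid: str) -> list:
    final = "csi-vol-" + new_uuid   # the final clone image
    temp = final + "-temp"          # the temporary clone image
    return [
        f"rbd snap create {pool}/{parent}@{temp}",         # 4: snapshot parent
        f"rbd clone {pool}/{parent}@{temp} {pool}/{temp}", # 5: clone temp volume
        f"rbd snap rm {pool}/{parent}@{temp}",             # 6: drop parent snap
        f"rbd snap create {pool}/{temp}@{final}",          # 7: snapshot temp
        f"rbd clone {pool}/{temp}@{final} {pool}/{final}", # 8: clone final volume
        f"rbd snap rm {pool}/{temp}@{final}",              # 9: drop temp snap
    ]
```

Note that the temp image itself is kept after step 9; as the next section shows, both `csi-vol-<uuid>` and `csi-vol-<uuid>-temp` remain in the pool.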

- The PVC and PV in Kubernetes


[root@node1 vdisk]# kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
cstor-pvc-clone-vdisk   Bound    pvc-de6ef224-990d-487c-a451-482b0b99dd23   1Gi        RWO            cstor-csi-vdisk-sc   176m

[root@node1 vdisk]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS         REASON   AGE
pvc-de6ef224-990d-487c-a451-482b0b99dd23   1Gi        RWO            Delete           Bound    default/cstor-pvc-clone-vdisk   cstor-csi-vdisk-sc            176m


- Check the PV's VolumeHandle


[root@node1 vdisk]# kubectl describe pv pvc-de6ef224-990d-487c-a451-482b0b99dd23
Name:            pvc-de6ef224-990d-487c-a451-482b0b99dd23
Labels:          <none>
Annotations:     pv.kubernetes.io/provisioned-by: vdisk.csi.cstor.com
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    cstor-csi-vdisk-sc
Status:          Bound
Claim:           default/cstor-pvc-clone-vdisk
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:              CSI (a Container Storage Interface (CSI) volume source)
    Driver:            vdisk.csi.cstor.com
    FSType:            ext4
    VolumeHandle:      0101-24-4a9e463a-4853-4237-a5c5-9ae9d25bacda-00000001-865a5413-f064-11ec-9a2d-36614e068d81
    ReadOnly:          false
    VolumeAttributes:      clusterID=4a9e463a-4853-4237-a5c5-9ae9d25bacda
                           driverType=vdisk.csi.cstor.com
                           imageName=csi-vol-865a5413-f064-11ec-9a2d-36614e068d81
                           journalPool=rbd
                           pool=rbd
                           radosNamespace=
                           storage.kubernetes.io/csiProvisionerIdentity=1655707323013-8081-csi.cstor.com
Events:                <none>


From the VolumeHandle we extract the uuid 865a5413-f064-11ec-9a2d-36614e068d81

 

- The uuid leads to the two images that belong to the clone: the temp image and the clone itself

csi-vol-865a5413-f064-11ec-9a2d-36614e068d81
csi-vol-865a5413-f064-11ec-9a2d-36614e068d81-temp


- The creation flow as seen in the CSI logs

got omap values: (pool="rbd", namespace="", name="csi.volume.768a1126-f064-11ec-9a2d-36614e068d81"):
 map[csi.imageid:2aa2c1b62ed876 csi.imagename:csi-vol-768a1126-f064-11ec-9a2d-36614e068d81
 csi.volname:pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223]
got omap values: (pool="rbd", namespace="", name="csi.volumes.default"): map[]
set omap keys (pool="rbd", namespace="", name="csi.volumes.default"):
map[csi.volume.pvc-de6ef224-990d-487c-a451-482b0b99dd23:865a5413-f064-11ec-9a2d-36614e068d81])
set omap keys (pool="rbd", namespace="", name="csi.volume.865a5413-f064-11ec-9a2d-36614e068d81"):
 map[csi.imagename:csi-vol-865a5413-f064-11ec-9a2d-36614e068d81 csi.volname:pvc-de6ef224-990d-487c-a451-482b0b99dd23])
generated Volume ID (0101-24-4a9e463a-4853-4237-a5c5-9ae9d25bacda-00000001-865a5413-f064-11ec-9a2d-36614e068d81)
 and image name (csi-vol-865a5413-f064-11ec-9a2d-36614e068d81) for request name (pvc-de6ef224-990d-487c-a451-482b0b99dd23)
rbd: snap create rbd/@csi-vol-865a5413-f064-11ec-9a2d-36614e068d81-temp using mon
rbd: clone rbd/csi-vol-768a1126-f064-11ec-9a2d-36614e068d81@csi-vol-865a5413-f064-11ec-9a2d-36614e068d81-temp
 csi-vol-865a5413-f064-11ec-9a2d-36614e068d81-temp (features: [layering]) using mon 10.20.10.12,10.20.10.13
rbd: snap rm rbd/csi-vol-768a1126-f064-11ec-9a2d-36614e068d81@csi-vol-865a5413-f064-11ec-9a2d-36614e068d81-temp using mon
rbd: snap create rbd/@csi-vol-865a5413-f064-11ec-9a2d-36614e068d81 using mon
rbd: clone rbd/csi-vol-865a5413-f064-11ec-9a2d-36614e068d81-temp@csi-vol-865a5413-f064-11ec-9a2d-36614e068d81
csi-vol-865a5413-f064-11ec-9a2d-36614e068d81 (features: [layering]) using mon 10.20.10.12,10.20.10.13
rbd: snap rm rbd/csi-vol-865a5413-f064-11ec-9a2d-36614e068d81-temp@csi-vol-865a5413-f064-11ec-9a2d-36614e068d81 using mon
set omap keys (pool="rbd", namespace="", name="csi.volume.865a5413-f064-11ec-9a2d-36614e068d81"):
map[csi.imageid:2aa2c1ea24fc3e])

Deleting a volume

Deletion flow

  1.  Check whether the volume is still in use.
  2.  Derive the temporary clone name, i.e. the volume name + "-temp".
  3.  Try to trash-remove the temporary volume; if the error is "image not found", ignore it, otherwise return the error.
  4.  Trash-remove the volume itself.
  5.  Remove the omap metadata object and remove the record from csi.volumes.default.
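The ordering matters: the temp clone (if any) is trashed before the image itself, and a missing temp clone is not an error. A minimal in-memory sketch of that ordering, where a plain set stands in for the pool and `delete_volume` is a hypothetical helper:

```python
def delete_volume(images: set, image_name: str) -> list:
    """Return the images trashed, in order, mutating `images` in place."""
    deleted = []
    temp = image_name + "-temp"
    if temp in images:           # step 3: trash the temp clone if present...
        images.discard(temp)
        deleted.append(temp)
    # ...and silently continue when it does not exist (see the
    # "Can not find rbd image ... but maybe no error" log line below).
    images.discard(image_name)   # step 4: trash the image itself
    deleted.append(image_name)
    return deleted
```

A freshly created volume has no temp clone, so only the image itself is trashed, exactly as the log below shows for csi-vol-768a1126-f064-11ec-9a2d-36614e068d81.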

- CSI logs


got omap values: (pool="rbd", namespace="", name="csi.volume.768a1126-f064-11ec-9a2d-36614e068d81"):
map[csi.imageid:2aa2c1b62ed876 csi.imagename:csi-vol-768a1126-f064-11ec-9a2d-36614e068d81
csi.volname:pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223]
command /usr/bin/rbd -m 10.20.10.12,10.20.10.13 --id test --keyfile=/tmp/csi/keys/keyfile-2661790888 -c /etc/ceph/ceph.conf --format=json info rbd/csi-vol-768a1126-f064-11ec-9a2d-36614e068d81 start
the command /usr/bin/rbd -m 10.20.10.12,10.20.10.13 --id test --keyfile=/tmp/csi/keys/keyfile-2661790888 -c /etc/ceph/ceph.conf --format=json info rbd/csi-vol-768a1126-f064-11ec-9a2d-36614e068d81 exits normally
Can not find rbd image: rbd/csi-vol-768a1126-f064-11ec-9a2d-36614e068d81-temp, but maybe no error
deleting image csi-vol-768a1126-f064-11ec-9a2d-36614e068d81
rbd: delete csi-vol-768a1126-f064-11ec-9a2d-36614e068d81 using mon 10.20.10.12,10.20.10.13, pool rbd
executing [rbd task add trash remove rbd/2aa2c1b62ed876 --id test --keyfile=/tmp/csi/keys/keyfile-2661790888
-m 10.20.10.12,10.20.10.13] for image (csi-vol-768a1126-f064-11ec-9a2d-36614e068d81)
 using mon 10.20.10.12,10.20.10.13, pool rbd
command /usr/bin/ceph rbd task add trash remove rbd/2aa2c1b62ed876 --id test
--keyfile=/tmp/csi/keys/keyfile-2661790888 -m 10.20.10.12,10.20.10.13 start
run the command successfully: /usr/bin/ceph rbd task add trash remove rbd/2aa2c1b62ed876
--id test --keyfile=/tmp/csi/keys/keyfile-2661790888 -m 10.20.10.12,10.20.10.13,
 stdout: {"message": "Removing image rbd/2aa2c1b62ed876 from trash",
 "id": "43152a6e-82a3-466f-b370-6173082501fc",
 "refs": {"action": "trash remove", "pool_name": "rbd", "pool_namespace": "",
 "image_id": "2aa2c1b62ed876"}, "sequence": 1}, stderr:
the command /usr/bin/ceph rbd task add trash remove rbd/2aa2c1b62ed876 --id test
 --keyfile=/tmp/csi/keys/keyfile-2661790888 -m 10.20.10.12,10.20.10.13 exits normally
removed omap keys (pool="rbd", namespace="", name="csi.volumes.default"):
[csi.volume.pvc-accbfd11-8b13-4a8a-96aa-cdd86b699223]

 
