Enabling Kafka authentication with JKS

2023-09-01 03:51:49

1. Download the Kafka Helm chart

helm fetch bitnami/kafka
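
If the bitnami repository has not been added to Helm yet, add it first, then unpack the downloaded chart archive so the following steps can work inside the local ./kafka directory (a minimal sketch; the archive file name depends on the chart version that was fetched):

helm repo add bitnami https://charts.bitnami.com/bitnami
tar -xzf kafka-*.tgz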

2. Go into the kafka chart's files directory and create a jks folder

mkdir jks

3. Download the script that automatically generates the JKS certificates

wget https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl-automatic.sh

export COUNTRY=US
export STATE=IL
export ORGANIZATION_UNIT=SE
export CITY=Chicago
export PASSWORD=secret
bash ./kafka-generate-ssl-automatic.sh

4. After the certificates are generated, two folders are created: keystore and truststore. Enter keystore and copy kafka.keystore.jks to the following files (see the copy commands after this list):

kafka-0.keystore.jks
kafka-1.keystore.jks
kafka-2.keystore.jks
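
A minimal sketch of that copy step, run from inside the keystore directory (each broker pod later mounts its own copy of the same keystore):

cp kafka.keystore.jks kafka-0.keystore.jks
cp kafka.keystore.jks kafka-1.keystore.jks
cp kafka.keystore.jks kafka-2.keystore.jks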

5. Create the secret

kubectl -n testkafka create secret generic kafka-jks \
--from-file=./truststore/kafka.truststore.jks \
--from-file=./keystore/kafka-0.keystore.jks \
--from-file=./keystore/kafka-1.keystore.jks \
--from-file=./keystore/kafka-2.keystore.jks
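
Before moving on, you can optionally confirm that the secret contains all four JKS entries:

kubectl -n testkafka describe secret kafka-jks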

6. Deploy ZooKeeper

helm install -n testkafka zk-jks ./zookeeper \
--set replicaCount=3 \
--set auth.enabled=true \
--set allowAnonymousLogin=false \
--set persistence.enabled=false \
--set auth.clientUser=zookeeperUser \
--set auth.clientPassword=zookeeperPassword \
--set auth.serverUsers=zookeeperUser \
--set auth.serverPasswords=zookeeperPassword
Persistence (PV) is disabled here; a real deployment needs persistent volumes configured (see the sketch below).
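
For a persistent deployment the flags would look roughly like this, assuming the chart's standard persistence values and a storage class available in your cluster (the name standard below is just a placeholder):

--set persistence.enabled=true \
--set persistence.storageClass=standard \
--set persistence.size=8Gi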
 

7. Deploy Kafka

helm install kafka-jks -n testkafka ./kafka \
--set replicaCount=3 \
--set zookeeper.enabled=false \
--set externalZookeeper.servers=zk-jks-zookeeper.testkafka \
--set persistence.enabled=false \
--set auth.clientProtocol=sasl \
--set auth.interBrokerProtocol=sasl_tls \
--set auth.tls.type=jks \
--set auth.jksSecret=kafka-jks \
--set auth.tls.existingSecret=kafka-jks \
--set auth.jksPassword=secret \
--set auth.jaas.zookeeperUser=zookeeperUser \
--set auth.jaas.zookeeperPassword=zookeeperPassword \
--set ssl.endpoint.identification.algorithm="" \
--set auth.tls.endpointIdentificationAlgorithm="" \
--set auth.sasl.jaas.clientUsers[0]=clientUser \
--set auth.sasl.jaas.clientPasswords[0]=clientPassword \
--set auth.sasl.jaas.interBrokerUser=admin \
--set auth.sasl.jaas.interBrokerPassword=adminPassword \
--set zookeeper.auth.enabled=true \
--set zookeeper.auth.serverUsers=zookeeperUser \
--set zookeeper.auth.serverPasswords=zookeeperPassword \
--set zookeeper.auth.clientUser=zookeeperUser \
--set zookeeper.auth.clientPassword=zookeeperPassword
Persistence (PV) is disabled here as well.
 
Instead of passing the ZooKeeper and Kafka parameters above on the command line, they can be set in each chart's values.yaml file, for example:
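
A fragment of a custom values file (hypothetical name values-kafka-jks.yaml) mirroring a few of the --set flags above, passed to helm with -f:

replicaCount: 3
persistence:
  enabled: false
auth:
  clientProtocol: sasl
  interBrokerProtocol: sasl_tls
  jksSecret: kafka-jks
  jksPassword: secret

helm install kafka-jks -n testkafka ./kafka -f values-kafka-jks.yaml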
 

8. Once the deployment has finished, connect to Kafka with the SASL client credentials configured above (clientUser / clientPassword); the zookeeperUser / zookeeperPassword pair is only used between the brokers and ZooKeeper. A client configuration sketch follows.
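
A minimal client-side sketch; the service name, listener port and SASL mechanism below are assumptions about the chart's defaults rather than values taken from this deployment, so adjust them to your environment:

# client.properties (hypothetical file)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="clientUser" password="clientPassword";

kafka-console-producer.sh --broker-list kafka-jks.testkafka.svc.cluster.local:9092 \
--topic test --producer.config client.properties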

The Bitnami Kafka chart version used here does not yet expose ACL settings, so enabling ACLs requires manually modifying the Helm templates and adding the configuration items below.
 

8.1. Edit statefulset.yaml under the templates folder and add four configuration items:

 

- name: KAFKA_CFG_AUTHORIZER_CLASS_NAME
  value: {{ .Values.authorizerClassName | quote }}
- name: KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND
  value: {{ .Values.allowEveryoneIfNoAclFound | quote }}
- name: KAFKA_CFG_SUPER_USERS
  value: {{ .Values.superUsers | quote }}
- name: KAFKA_CFG_ZOOKEEPER_SET_ACL
  value: {{ .Values.zookeeperSetAcl | quote }}

8.2. Edit values.yaml and add the corresponding values:

## custom ACL settings
authorizerClassName: kafka.security.authorizer.AclAuthorizer
allowEveryoneIfNoAclFound: false
superUsers: User:admin
zookeeperSetAcl: true

8.3. Delete the existing Kafka release and redeploy it; ACL authorization is then enabled (a sketch follows).
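
A sketch of the redeploy, assuming the release and namespace names used above:

helm uninstall kafka-jks -n testkafka
# then repeat the helm install command from step 7; the modified templates now include the ACL settings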

The operations below cover Kafka authorization and ACLs: creating and deleting accounts and granting permissions.
With ACL authorization enabled, topics must be created in advance; they are not created automatically (see the example after this note).
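
For instance, the test topic used below can be created up front against the same ZooKeeper address (a sketch; on newer Kafka versions replace --zookeeper with --bootstrap-server and client credentials):

kafka-topics.sh --create --zookeeper zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--topic test --partitions 3 --replication-factor 3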
 

9. Create a new account (the example below creates a user named reader with the password reader-pwd):

kafka-configs.sh --zookeeper zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--alter --add-config 'SCRAM-SHA-256=[password=reader-pwd],SCRAM-SHA-512=[password=reader-pwd]' \
--entity-type users --entity-name reader

10. Delete an account

kafka-configs.sh --zookeeper zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name writer

11. Show account details

kafka-configs.sh --zookeeper zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--describe --entity-type users --entity-name writer

12. Grant permissions to an account

Producer (write) authorization:
kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--add --allow-principal User:reader --operation Write --topic test 
Consumer (read) authorization (a consumer group must be specified here, otherwise consumption is not possible):
kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--add --allow-principal User:reader --operation Read --topic test --group test-group
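
To verify, the reader account can then consume from the topic. This reuses the client.properties style from step 8, but with the SCRAM credentials created in step 9 (the mechanism choice and service name are assumptions):

# reader.properties (hypothetical file)
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="reader" password="reader-pwd";

kafka-console-consumer.sh --bootstrap-server kafka-jks.testkafka.svc.cluster.local:9092 \
--topic test --group test-group --consumer.config reader.properties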

13. List ACLs

kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer --authorizer-properties \
zookeeper.connect=zk-jks-zookeeper.testkafka.svc.cluster.local:2181 --list

14. Set permissions on a specific topic

kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--add --deny-principal User:writer --operation Write --topic test

15. Remove all ACLs from a topic

kafka-acls.sh --authorizer kafka.security.authorizer.AclAuthorizer \
--authorizer-properties zookeeper.connect=zk-jks-zookeeper.testkafka.svc.cluster.local:2181 \
--remove --topic test