Handling Common Kafka Problems

2022-06-29 08:58:23

Message consumption timeouts causing duplicate consumption and message backlog

Example error log

org.springframework.kafka.KafkaListenerEndpointContainer#2-0-C-1 org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.maybeAutoCommitOffsetsSync:648 - Auto-commit of offsets {gate_contact_modify-0=OffsetAndMetadata{offset=2801, metadata=''}} failed for group smart-building-consumer-group: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records

Consequences of the bug

1. Duplicate consumption.

2. Message backlog: because the offset is never committed, the consumer keeps re-consuming the same batch of messages over and over, later messages are never reached, and a backlog builds up.

Problem analysis

This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms

The log tells us directly that the consumption time exceeded max.poll.interval.ms (default 300 seconds, i.e. 5 minutes).

Since we know the processing time exceeds this limit, the fix is straightforward: start by measuring how long each message takes to process, as in the sketch below.
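As a minimal timing sketch (assuming a Spring Kafka listener; the process() handler is hypothetical, while the topic and group id are taken from the error log above):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class GateContactListener {

    private static final Logger log = LoggerFactory.getLogger(GateContactListener.class);

    @KafkaListener(topics = "gate_contact_modify", groupId = "smart-building-consumer-group")
    public void onMessage(String message) {
        long start = System.currentTimeMillis();
        process(message); // hypothetical business handler
        long elapsedMs = System.currentTimeMillis() - start;
        // If elapsedMs * max.poll.records approaches max.poll.interval.ms,
        // the rebalance shown in the log above is expected.
        log.info("processed one record in {} ms", elapsedMs);
    }

    private void process(String message) {
        // placeholder for the real business logic
    }
}

If the per-record time multiplied by the batch size (max.poll.records, default 500) comes anywhere near 5 minutes, the consumer cannot finish a batch between two poll() calls.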

Solutions

1. Reduce the amount of data fetched per batch by lowering the max.poll.records parameter (only available in newer clients, after Kafka 0.9; the default is 500 records).
2. Optimize per-message processing so that the whole batch completes within max.poll.interval.ms.
3. Process the fetched messages with multiple threads (see the sketch after this list).
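A rough sketch combining options 1 and 3, using the plain Kafka Java client (the bootstrap address, pool size, and handle() method are assumptions for illustration): lower max.poll.records, disable auto-commit, fan each batch out to a thread pool, and commit only after the whole batch has finished.

import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class BatchedParallelConsumer {

    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "smart-building-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");    // option 1: smaller batches (default 500)
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit manually after the batch is done

        ExecutorService pool = Executors.newFixedThreadPool(8); // option 3: parallel processing, size is an assumption
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("gate_contact_modify"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                if (records.isEmpty()) continue;
                List<Callable<Void>> tasks = new ArrayList<>();
                for (ConsumerRecord<String, String> record : records) {
                    tasks.add(() -> { handle(record); return null; });
                }
                pool.invokeAll(tasks); // block until every record in the batch is processed
                consumer.commitSync(); // safe: the whole batch finished within this poll cycle
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        // hypothetical per-record business logic
    }
}

Because offsets are committed only after invokeAll() returns, this keeps at-least-once semantics: a crash mid-batch re-delivers the batch, but the poll loop stays fast enough to avoid the rebalance.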

————————————————

Copyright notice: This article is an original work by the CSDN blogger 「有趣的灵魂_不世俗的心」, licensed under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.

原文链接:https://blog.csdn.net/weixin_42324471/article/details/123403625
