
Topic not showing MongoDB Kafka messages

How to fix a topic not showing MongoDB Kafka messages

I'm running into an issue where, even though everything is up and running, my topic does not register the events happening in my MongoDB.

Every time I insert or modify a record, I no longer get any output from the kafka-console-consumer command.
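
For context, the consumer check is along these lines (a sketch only: the broker address and topic name are placeholders, not the actual values from this setup):

    # Placeholder broker address and topic name, substitute the real ones.
    kafka-console-consumer --bootstrap-server localhost:9092 \
      --topic mydb.mycollection \
      --from-beginning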

Is there a way to clear Kafka's cache/offsets? The source and sink connectors are up and running, and the whole cluster is healthy as well. The thing is, everything works as usual, but every few weeks I see this come back, or it happens when I log in to my Mongo Cloud from another location.

The --partition 0 argument did not help, and neither did changing retention.ms to 1.
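
For completeness, the retention change was applied with something like the sketch below (the topic name is a placeholder; note that retention.ms=1 only tells the broker to expire existing records quickly, it does not make missing change events appear):

    # Sketch of lowering topic retention; the topic name is a placeholder.
    # Newer Kafka versions accept --bootstrap-server here (older ones used --zookeeper).
    kafka-configs --bootstrap-server localhost:9092 --alter \
      --entity-type topics --entity-name mydb.mycollection \
      --add-config retention.ms=1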


I checked the status of both connectors and both report RUNNING:

    curl localhost:8083/connectors | jq


    curl localhost:8083/connectors/monit_people/status | jq

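A healthy status call returns JSON shaped roughly like this (illustrative output only; the worker address and task count depend on the deployment):

    {
      "name": "monit_people",
      "connector": { "state": "RUNNING", "worker_id": "connect:8083" },
      "tasks": [ { "id": 0, "state": "RUNNING", "worker_id": "connect:8083" } ],
      "type": "source"
    }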

Running docker-compose logs connect I found:

    WARN Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286

    If the resume token is no longer available then there is the potential for data loss.
    Saved resume tokens are managed by Kafka and stored with the offset data.

    When running Connect in standalone mode offsets are configured using the
    `offset.storage.file.filename` configuration.
    When running Connect in distributed mode the offsets are stored in a topic.

    Use the `kafka-consumer-groups.sh` tool with the `--reset-offsets` flag to reset offsets.

    Resetting the offset will allow for the connector to be resumed from the latest resume token.
    Using `copy.existing=true` ensures that all data will be outputted by the connector but it will duplicate existing data.
    Future releases will support a configurable `errors.tolerance` level for the source connector and make use of the `postBatchResumeToken`.
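
Following the log's own hint, a reset would look roughly like the sketch below. The group and topic names are placeholders, so list the existing groups first; also note that, as the message above says, a source connector in distributed mode keeps its resume token in the Connect offsets topic rather than in a consumer group, so check which case applies before executing anything.

    # List existing consumer groups first; the group and topic below are placeholders.
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

    # Reset the group's offsets to the latest position (omit --execute for a dry run).
    kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
      --group connect-monit_people \
      --topic mydb.mycollection \
      --reset-offsets --to-latest --execute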

Workaround

The issue requires more practice with Confluent Platform, so for now I rebuilt the whole environment by removing all the containers:

    docker system prune -a -f --volumes

    docker container stop $(docker container ls -a -q -f "label=io.confluent.docker")

After running docker-compose up -d everything works fine again.
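
One thing to keep in mind: rebuilding the containers from scratch also recreates the Kafka topics that store the Connect configs and offsets, so unless the compose setup registers the connectors automatically, the source and sink configs have to be POSTed to the REST API again (the file name below is a placeholder for whatever config file is actually used):

    # Hypothetical re-registration; the JSON file is a placeholder for the
    # actual source/sink connector configuration.
    curl -s -X POST -H "Content-Type: application/json" \
      --data @monit_people-source.json \
      localhost:8083/connectors | jq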
