How do I fix inaccessible Storm log files?
I am trying to get an Apache Kafka → Storm pipeline working, but since I am not receiving any tuples, I figured it would be a good idea to look at the logs. I start the whole pipeline, including the logviewer and the supervisor, with docker-compose. When I access the logviewer directly, it gives me a success message, which means it is up. However, I cannot access the individual log files linked from the topology overview: neither the log files for the topology itself nor the supervisor's log file at http://192.168.99.100:8000/api/v1/daemonlog?file=supervisor.log return any result.

I suspect the supervisor itself has a communication problem, but my detailed knowledge here is fairly limited. Also, I am using Docker Toolbox and run all images at 192.168.99.100.

This is my docker-compose.yml:
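Concretely, these are the two checks described above, run from the host (192.168.99.100 is the Docker Toolbox VM address; the file name is the one from the daemonlog URL in my setup):

```shell
# Logviewer root page: answers with a success message, so the daemon itself is up
curl -s http://192.168.99.100:8000/

# Supervisor daemon log via the logviewer API: returns no result
curl -s "http://192.168.99.100:8000/api/v1/daemonlog?file=supervisor.log"
```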
services:
  zookeeper:
    image: bitnami/zookeeper
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: bitnami/kafka
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.99.100:9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_LOG4J_ROOT_LOGLEVEL=DEBUG
    depends_on:
      - zookeeper
    restart: always
  nimbus:
    image: storm
    container_name: nimbus
    command: >
      storm nimbus
      -c storm.zookeeper.servers="[\"zookeeper\"]"
      -c storm.local.hostname=192.168.99.100
    depends_on:
      - zookeeper
    links:
      - zookeeper
    restart: always
    ports:
      - 6627:6627
  supervisor:
    image: storm
    container_name: supervisor
    command: >
      storm supervisor
      -c nimbus.seeds="[\"nimbus\"]"
      -c storm.zookeeper.servers="[\"zookeeper\"]"
      -c storm.local.hostname=192.168.99.100
    depends_on:
      - nimbus
    links:
      - nimbus
    restart: always
  ui:
    image: storm
    container_name: ui
    command: >
      storm ui
      -c nimbus.seeds="[\"nimbus\"]"
    depends_on:
      - nimbus
    links:
      - nimbus
    restart: always
    ports:
      - 8080:8080
  log:
    image: storm
    container_name: logviewer
    command: >
      storm logviewer
    depends_on:
      - supervisor
    links:
      - supervisor
    restart: always
    ports:
      - 8000:8000