How to fix a Java HDFS write error on a Hadoop Docker cluster: there are 3 datanode(s) running and 3 node(s) are excluded in this operation
I am running a Hadoop cluster on Docker, and when I try to write to HDFS from Java I get the error below. I can't figure out what is causing it:
Exception in thread "main" org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/javadeveloperzone/javareadwriteexample/read_write_hdfs_example.txt could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2219)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2789)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:892)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:574)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:528)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:999)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:927)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2915)
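From searching around, this message seems to mean the client reached the NameNode fine (the exception is a server-side RemoteException), but could not open a write pipeline to any of the three DataNodes, so all of them ended up on the excluded list. With Hadoop in Docker this is apparently a common symptom: the DataNodes register container-internal hostnames/IPs with the NameNode, and a client on the host cannot reach those addresses or the DataNode transfer port (9866 by default on Hadoop 3.x) unless the ports are published. A client-side workaround I've seen suggested, sketched below and not verified here, is to tell the HDFS client to dial DataNodes by hostname, with /etc/hosts entries mapping the container hostnames to localhost:

Configuration configuration = new Configuration();
configuration.set("fs.defaultFS", "hdfs://localhost:9000");
// Ask the client to connect to DataNodes by the hostname they registered
// with the NameNode, instead of their container-internal IP addresses.
// Assumes those hostnames resolve from this machine (e.g. /etc/hosts)
// and that each DataNode's transfer port is published by Docker.
configuration.set("dfs.client.use.datanode.hostname", "true");
FileSystem fileSystem = FileSystem.get(configuration);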
Repository: https://github.com/nsquare-jdzone/hadoop-examples/tree/master/ReadWriteHDFSExample, from this tutorial: https://javadeveloperzone.com/hadoop/java-read-write-files-hdfs-example/
To summarize: the tutorial builds on the Big Data Europe repository (https://github.com/big-data-europe/docker-hadoop) and additionally uses a docker-compose.yml that runs multiple datanodes instead of a single one. The tutorial was written against an older version of the Big Data Europe repository, so I updated my docker-compose.yml to match. The Java code that triggers the error is:
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public static void writeFiletoHDFS() throws IOException {
    Configuration configuration = new Configuration();
    configuration.set("fs.defaultFS", "hdfs://localhost:9000");
    FileSystem fileSystem = FileSystem.get(configuration);
    // Create the destination path
    String fileName = "read_write_hdfs_example.txt";
    Path hdfsWritePath = new Path("/user/javadeveloperzone/javareadwriteexample/" + fileName);
    // Open the file for writing, overwriting any existing file
    FSDataOutputStream fsDataOutputStream = fileSystem.create(hdfsWritePath, true);
    BufferedWriter bufferedWriter = new BufferedWriter(
            new OutputStreamWriter(fsDataOutputStream, StandardCharsets.UTF_8));
    bufferedWriter.write("Java API to write data in HDFS");
    bufferedWriter.newLine();
    bufferedWriter.close();
    fileSystem.close();
}
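In case it helps with diagnosis, here is a small check I could run next to the write call — a sketch using the public DistributedFileSystem API, assuming the same configuration object as above. It prints the transfer address the NameNode advertises for each live DataNode; if these come back as container-internal IPs, the write pipeline will fail against every node:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Diagnostic sketch (not from the tutorial): list the live DataNodes and
// the transfer addresses the NameNode advertises for them. These are the
// addresses the write pipeline will try, and exclude when unreachable.
public static void printDataNodeAddresses(Configuration configuration) throws IOException {
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(configuration);
    for (DatanodeInfo dn : dfs.getDataNodeStats()) {
        System.out.println(dn.getHostName() + " -> " + dn.getXferAddr());
    }
    dfs.close();
}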
Any help would be greatly appreciated.