How to read a file from Google Storage in Dataproc
I'm trying to migrate a Scala Spark job from a Hadoop cluster to GCP. I have this code snippet that reads a file and builds an ArrayBuffer[String]:
import java.io._
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, FSDataInputStream, Path}
import scala.collection.mutable.ArrayBuffer

val filename = "it.txt.1604607878987"
val fs = FileSystem.get(new Configuration())
val dataInputStream: FSDataInputStream = fs.open(new Path(filename))
val sourceEDR = new BufferedReader(new InputStreamReader(dataInputStream, "UTF-8"))
val outputEDRFile = ArrayBuffer[String]()
val buffer = new Array[Char](300)
var num_of_chars = 0
while (sourceEDR.read(buffer) > -1) {
  val str = new String(buffer)
  num_of_chars += str.length
  outputEDRFile += (str + "\n")
}
println(num_of_chars)
This code runs on the cluster and gives me 3025000 characters. I tried to run this code in Dataproc:
val path_gs = new Path("gs://my-bucket")
val filename = "it.txt.1604607878987"
val fs = path_gs.getFileSystem(new Configuration())
val dataInputStream: FSDataInputStream = fs.open(new Path(filename))
val sourceEDR = new BufferedReader(new InputStreamReader(dataInputStream, "UTF-8"))
val outputEDRFile = ArrayBuffer[String]()
val buffer = new Array[Char](300)
var num_of_chars = 0
while (sourceEDR.read(buffer) > -1) {
  val str = new String(buffer)
  num_of_chars += str.length
  outputEDRFile += (str + "\n")
}
println(num_of_chars)
It gives 3175025 characters. I think spaces are being added to the file content, or do I have to use another interface to read files from Google Storage in Dataproc? I also tried other encoding options, but they give the same result. Any help?
Solution
I didn't find a solution using a buffer, so I tried reading char by char, and that worked for me:
var i = 0
var r = 0
val response = new StringBuilder
while ({ r = sourceEDR.read(); r } != -1) {
  val ch = r.asInstanceOf[Char]
  if (response.length < 300) {
    response.append(ch)
  } else {
    val str = response.toString().replaceAll("[\\r\\n]", " ")
    i += str.length
    outputEDRFile += (str + "\n")
    response.setLength(0)
    response.append(ch)
  }
}
val str = response.toString().replaceAll("[\\r\\n]", " ")
i += str.length
outputEDRFile += (str + "\n")
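A likely explanation for the discrepancy in the buffered version: `BufferedReader.read(char[])` returns the number of characters actually read, which can be less than the buffer size (a stream over GCS may return short reads more often than HDFS does), so `new String(buffer)` copies stale characters left over from earlier iterations. A minimal sketch of the counting loop that uses only the chars actually read (the `countChars` helper is hypothetical, and it is exercised here against an in-memory reader rather than a GCS file):

```scala
import java.io.{BufferedReader, StringReader}
import scala.collection.mutable.ArrayBuffer

object BufferedCount {
  // Count characters, honoring the return value of read():
  // only the first n chars of the buffer are valid after each call.
  def countChars(reader: BufferedReader): Int = {
    val buffer = new Array[Char](300)
    val outputEDRFile = ArrayBuffer[String]()
    var total = 0
    var n = reader.read(buffer)
    while (n > -1) {
      // The tail of the buffer beyond n may hold data from a previous read.
      val str = new String(buffer, 0, n)
      total += str.length
      outputEDRFile += (str + "\n")
      n = reader.read(buffer)
    }
    total
  }

  def main(args: Array[String]): Unit = {
    // In-memory stand-in for the GCS input stream.
    val sample = "a" * 1234
    println(countChars(new BufferedReader(new StringReader(sample))))
  }
}
```

With this change the count should match the actual file length on both HDFS and GCS, regardless of how each stream chunks its reads.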