This article walks through a number of problems we ran into while using Hadoop and Hive, and how each one was resolved. Hopefully the notes below save you some debugging time.
1. DataNode fails to start
After adding a datanode, it failed to start properly: the process kept dying for no obvious reason. The namenode log showed the following:
2013-06-21 18:53:39,182 FATAL org.apache.hadoop.hdfs.StateChange: BLOCK* NameSystem.getDatanode: Data node x.x.x.x:50010 is attempting to report storage ID DS-1357535176-x.x.x.x-50010-1371808472808. Node y.y.y.y:50010 is expected to serve this storage.
Cause:
The Hadoop installation directory was copied to the new node together with its data and tmp folders, so the datanode still carried the old storage ID and was never freshly formatted.
Solution:
rm -rf /data/hadoop/hadoop-1.1.2/data
rm -rf /data/hadoop/hadoop-1.1.2/tmp
hadoop datanode -format
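After wiping the stale storage, restart the datanode on that machine and confirm it registers with the namenode; a minimal check on Hadoop 1.x uses the standard daemon script and dfsadmin:
# on the affected node
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
# from any node: the new datanode should be listed as alive
hadoop dfsadmin -report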
2. Safe mode
2013-06-20 10:35:43,758 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot renew lease for DFSClient_hb_rs_wdev1.corp.qihoo.net,60020,1371631589073. Name node is in safe mode.
Solution:
hadoop dfsadmin -safemode leave
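Before forcing it out, it can be worth checking whether the namenode is merely still starting up and counting blocks, in which case it usually leaves safe mode on its own; the status is queryable with the matching dfsadmin subcommand:
hadoop dfsadmin -safemode get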
3. Connection exception
2013-06-21 19:55:05,801 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to homename/x.x.x.x:9000 failed on local exception: java.io.EOFException
Possible causes:
The namenode is listening on 127.0.0.1:9000 rather than on 0.0.0.0:9000 or the external IP, or iptables is blocking the port.
Solution:
Check /etc/hosts so that the hostname resolves to an IP other than 127.0.0.1, and open the port in iptables; see the sketch below.
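A minimal sketch of both fixes, assuming a hypothetical namenode host named master at 192.168.1.10 (substitute your own hostname, address, and port):
# /etc/hosts on the namenode: bind the hostname to the real IP, not 127.0.0.1
192.168.1.10   master
# open the namenode RPC port in iptables
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT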
4. Incompatible namespaceIDs
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /var/lib/hadoop-0.20/cache/hdfs/dfs/data: namenode namespaceID = 240012870; datanode namespaceID = 1462711424.
Problem: the namespaceID on the namenode does not match the namespaceID on the datanode.
Cause: every namenode format generates a new namespaceID, while tmp/dfs/data on the datanodes still holds the ID from the previous format. Formatting clears the namenode's data but not the datanodes', so the two namespaceIDs diverge and the datanode fails to start.
Solution: the page at http://blog.csdn.net/wh72592855/archive/2010/07/21/5752199.aspx describes two fixes; we used the first one:
(1) Stop the cluster services.
(2) On the problem datanode, delete the data directory, i.e. the dfs.data.dir directory configured in hdfs-site.xml; on this machine it is /var/lib/hadoop-0.20/cache/hdfs/dfs/data/. (Note: we ran this step on all datanodes and the namenode. In case the deletion does not fix things, keep a backup copy of the data directory first.)
(3) Format the namenode.
(4) Restart the cluster.
This resolved the problem; the steps are sketched as commands below.
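As a rough command sketch, steps (1) through (4) on a Hadoop 0.20/1.x cluster look like the following (paths as in step (2); renaming instead of deleting keeps the backup recommended above):
# (1) stop the cluster
stop-all.sh
# (2) on each datanode, move the data directory aside (this is the backup)
mv /var/lib/hadoop-0.20/cache/hdfs/dfs/data /var/lib/hadoop-0.20/cache/hdfs/dfs/data.bak
# (3) on the namenode, reformat
hadoop namenode -format
# (4) restart the cluster
start-all.sh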
A side effect of this method is that all data on HDFS is lost, so it is not recommended when HDFS holds important data; in that case, try the second method from the page linked above.
5. Directory permissions
start-dfs.sh runs without errors and reports that the datanode is starting, but afterwards no datanode process exists. The log on the datanode machine shows the failure is caused by incorrect permissions on the dfs.data.dir directory:
expected: drwxr-xr-x, current: drwxrwxr-x
Solution:
Check the directory configured as dfs.data.dir and correct its permissions, as sketched below.
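For example, to match the expected drwxr-xr-x (octal 755), assuming dfs.data.dir points at the data directory from section 1 (substitute your own path):
chmod 755 /data/hadoop/hadoop-1.1.2/data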
Hive errors
1. NoClassDefFoundError
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.hbase.io.HbaseObjectWritable
Add protobuf-***.jar to the auxiliary jars path:
<!-- $HIVE_HOME/conf/hive-site.xml -->
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///data/hadoop/hive-0.10.0/lib/hive-hbase-handler-0.10.0.jar,file:///data/hadoop/hive-0.10.0/lib/hbase-0.94.8.jar,file:///data/hadoop/hive-0.10.0/lib/zookeeper-3.4.5.jar,file:///data/hadoop/hive-0.10.0/lib/guava-r09.jar,file:///data/hadoop/hive-0.10.0/lib/hive-contrib-0.10.0.jar,file:///data/hadoop/hive-0.10.0/lib/protobuf-java-2.4.0a.jar</value>
</property>
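Restart the Hive CLI after editing the file so the new hive.aux.jars.path takes effect. For a one-off session, the same jars can instead be passed on the command line via the CLI's --auxpath option (comma-separated paths; shown here with two of the jars from the list above):
hive --auxpath /data/hadoop/hive-0.10.0/lib/protobuf-java-2.4.0a.jar,/data/hadoop/hive-0.10.0/lib/hbase-0.94.8.jar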
2. Hive dynamic partition exception
[Fatal error] Operator FS_2 (id=2): Number of dynamic partitions exceeded hive.exec.max.dynamic.partitions.pernode
hive> set hive.exec.max.dynamic.partitions.pernode=10000;
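If the job still trips the limit, the job-wide cap may need raising as well; hive.exec.max.dynamic.partitions is the overall counterpart of the per-node setting shown above:
hive> set hive.exec.max.dynamic.partitions=100000;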
3. MapReduce tasks exceed the memory limit: hadoop Java heap space
Edit mapred-site.xml and add:
<!-- mapred-site.xml -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
This sets the heap for each child task JVM. To give the Hadoop daemons themselves more heap, raise HADOOP_HEAPSIZE (in MB):
# $HADOOP_HOME/conf/hadoop-env.sh
export HADOOP_HEAPSIZE=5000
4. Hive created-files limit
[Fatal error] total number of created files now is 100086, which exceeds 100000
hive> set hive.exec.max.created.files=655350;
5. Metastore connection timeout
FAILED: SemanticException org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
Solution (in Hive releases of this era the value is read as seconds, so 500 means a little over eight minutes):
hive> set hive.metastore.client.socket.timeout=500;
6. java.io.IOException: error=7, Argument list too long
Task with the most failures(5):
-----
Task ID: task_201306241630_0189_r_000009
URL: http://namenode.godlovesdog.com:50030/taskdetails.jsp?jobid=job_201306241630_0189&tipid=task_201306241630_0189_r_000009
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"164058872","reducesinkkey1":"djh,S1","reducesinkkey2":"20130117170703","reducesinkkey3":"xxx"},"value":{"_col0":"1","_col1":"xxx","_col2":"20130117170703","_col3":"164058872","_col4":"xxx,S1"},"alias":0}
	at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
	at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:520)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"reducesinkkey0":"164058872","reducesinkkey1":"xxx,S1","reducesinkkey2":"20130117170703","reducesinkkey3":"xxx"},"value":{"_col0":"1","_col1":"xxx","_col2":"20130117170703","_col3":"164058872","_col4":"djh,S1"},"alias":0}
	at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
	... 7 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: [Error 20000]: Unable to initialize custom script.
	at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:354)
	at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
	at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
	at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:800)
	at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
	at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:474)
	at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:249)
	... 7 more
Caused by: java.io.IOException: Cannot run program "/usr/bin/python2.7": error=7, Argument list too long
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)
	at org.apache.hadoop.hive.ql.exec.ScriptOperator.processOp(ScriptOperator.java:313)
	... 15 more
Caused by: java.io.IOException: error=7, Argument list too long
	at java.lang.UNIXProcess.forkAndExec(Native Method)
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
	at java.lang.ProcessImpl.start(ProcessImpl.java:130)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1023)
	... 16 more
FAILED: Execution Error, return code 20000 from org.apache.hadoop.hive.ql.exec.MapRedTask. Unable to initialize custom script.
Here error=7 is the operating system's E2BIG: the combined size of the arguments and environment passed to exec exceeded the kernel limit. Hive exports the job configuration to the custom script as environment variables, so an oversized job configuration can trip this; the usual workaround is to shrink what gets carried in the configuration.