
Setting Up a Hadoop 2.x HDFS Source-Code Test Environment on Ubuntu 16

The full Hadoop source tree is too bulky to read all at once, so here we set up an environment for just the HDFS source code. Analyzing HDFS first lays the groundwork for later analysis of MapReduce, YARN, and the other components.

1. Download hadoop-2.7.3.tar.gz and hadoop-2.7.3-src.tar.gz

Official download address: http://apache.fayea.com/hadoop/common/hadoop-2.7.3/
Extract the two archives separately:
hadoop-2.7.3.tar.gz is used to build the local pseudo-distributed environment;
hadoop-2.7.3-src.tar.gz is used to build the HDFS source-code environment.
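A minimal sketch of the download and extraction steps (assuming everything goes under ~/software; the mirror URL may change over time):

# download the binary and source tarballs
cd ~/software
wget http://apache.fayea.com/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
wget http://apache.fayea.com/hadoop/common/hadoop-2.7.3/hadoop-2.7.3-src.tar.gz

# extract both
tar -zxvf hadoop-2.7.3.tar.gz
tar -zxvf hadoop-2.7.3-src.tar.gz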

2. Set Up the HDFS Source-Code Environment

1) In the extracted source tree, go to hadoop-2.7.3-src -> hadoop-hdfs-project -> hadoop-hdfs and copy hadoop-hdfs (a project directory containing its own pom.xml) to a working location. Open IDEA and import it as a Maven project; you get a standard Maven project layout.

2) Open the hdfs launcher script in the bin directory of hadoop-2.7.3 and find the class that the datanode subcommand starts: org.apache.hadoop.hdfs.server.datanode.DataNode.
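You can locate it quickly with grep (assuming the binary release sits in the current directory):

grep -n datanode hadoop-2.7.3/bin/hdfs
# the script sets CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
# for the datanode subcommand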

3) Open DataNode in IDEA and run it. Compilation fails with the following error:

The package org.apache.hadoop.hdfs.protocol.proto cannot be found.
A bit of searching explains why: protobuf (Protocol Buffers) is a data-interchange format, and Hadoop uses it for RPC between distributed components and for data exchange across heterogeneous environments. The missing package is not checked into the source tree; it is generated from .proto definitions by the protobuf compiler during the build.
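For reference, the Maven build drives protoc through hadoop-maven-plugins; hand-running the equivalent generation looks roughly like this (a sketch, using the relative paths as they are inside the original source tree, not the copied-out project):

mkdir -p target/generated-sources/java
protoc -I src/main/proto \
       -I ../../hadoop-common-project/hadoop-common/src/main/proto \
       --java_out=target/generated-sources/java \
       src/main/proto/*.proto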
4) Install protobuf.
(1) Download the protobuf source: https://github.com/google/protobuf (note that Hadoop 2.7.x is built against protoc 2.5.0, so use the v2.5.0 release rather than the latest master).
(2) Install the required build tools: sudo apt-get install autoconf automake libtool curl make g++ unzip
(3) Run the following commands in order:

./autogen.sh     # only needed when building from a git checkout
./configure
make
make check
sudo make install
sudo ldconfig    # important! refresh shared library cache.

Run protoc --version; if it prints the version (it should report libprotoc 2.5.0), the installation succeeded.
(4) Install the Java protobuf runtime into your local Maven repository:

cd java
mvn install  # Install the library into your Maven repository

5) Run mvn compile in IDEA:

The protobuf sources are generated under the project and the compile error disappears:
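If generation succeeds, the generated classes appear under the build output directory (a sketch, assuming the plugin's default output location):

ls target/generated-sources/java/org/apache/hadoop/hdfs/protocol/proto/
# ClientNamenodeProtocolProtos.java  DatanodeProtocolProtos.java  ...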


If code generation fails, look at pom.xml: the protoc plugin imports .proto files from the hadoop-common project, and since we copied hadoop-hdfs out of the source tree, those relative paths may no longer resolve. Point them at the correct location:
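The relevant plugin section in pom.xml looks roughly like this (trimmed; the paths under imports are the ones to fix):

<plugin>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-maven-plugins</artifactId>
  <executions>
    <execution>
      <id>compile-protoc</id>
      <goals><goal>protoc</goal></goals>
      <configuration>
        <protocVersion>${protobuf.version}</protocVersion>
        <protocCommand>${protoc.path}</protocCommand>
        <imports>
          <!-- must point at a real hadoop-common source tree -->
          <param>${basedir}/../../hadoop-common-project/hadoop-common/src/main/proto</param>
          <param>${basedir}/src/main/proto</param>
        </imports>
        ...
      </configuration>
    </execution>
  </executions>
</plugin>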

6) Run DataNode again.
(1) The following error is reported:

java.lang.ClassNotFoundException: org.apache.hadoop.tracing.TraceAdminProtocol

Fix it by changing the scope of the hadoop-auth and hadoop-common dependencies in pom.xml, commenting out provided so the classes are on the runtime classpath:

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-auth</artifactId>
  <!--<scope>provided</scope>-->
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <!--<scope>provided</scope>-->
</dependency>

(2) Copy the configuration files from etc/hadoop under hadoop-2.7.3 into the project's resources directory and run DataNode again. It now fails with: webapps/datanode not found in CLASSPATH

The quickest workaround is to copy the main/webapps directory into the resources directory (better avoided) or to add webapps to the CLASSPATH properly; both copy steps are sketched below. Then start DataNode:
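A minimal sketch of both copy steps (assuming the binary release is in ~/software/hadoop-2.7.3 and the IDEA project in ~/work/hadoop-hdfs; adjust paths to your layout):

# put the cluster configuration on the classpath
cp ~/software/hadoop-2.7.3/etc/hadoop/core-site.xml \
   ~/software/hadoop-2.7.3/etc/hadoop/hdfs-site.xml \
   ~/work/hadoop-hdfs/src/main/resources/

# quick-and-dirty: put the DataNode web UI resources on the classpath too
cp -r ~/work/hadoop-hdfs/src/main/webapps ~/work/hadoop-hdfs/src/main/resources/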

The DataNode now starts successfully and tries to connect to the NameNode. Since no NameNode has been set up yet, the log keeps printing: retrying connect to server: ubuntu16/192.168.19.1:9000

3. Method 1: Run the NameNode from Source

Run org.apache.hadoop.hdfs.server.namenode.NameNode. The startup log is shown below.
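If the name directory (dfs.namenode.name.dir) has never been formatted, format it once first from the hadoop-2.7.3 binary directory, which shares the same configuration (a sketch):

bin/hdfs namenode -format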

16/12/25 11:25:33 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
16/12/25 11:25:33 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
16/12/25 11:25:33 INFO impl.MetricsSystemImpl: NameNode metrics system started
16/12/25 11:25:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/25 11:25:35 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
16/12/25 11:25:35 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
16/12/25 11:25:35 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
16/12/25 11:25:35 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
16/12/25 11:25:35 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
16/12/25 11:25:35 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
16/12/25 11:25:35 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,pathSpec=/webhdfs/v1/*
16/12/25 11:25:35 INFO http.HttpServer2: Jetty bound to port 50070
16/12/25 11:25:36 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/12/25 11:25:36 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/12/25 11:25:36 INFO namenode.FSNamesystem: No KeyProvider found.
16/12/25 11:25:36 INFO namenode.FSNamesystem: fsLock is fair:true
16/12/25 11:25:36 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/12/25 11:25:36 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/12/25 11:25:36 INFO util.GSet: Computing capacity for map BlocksMap
16/12/25 11:25:36 INFO util.GSet: VM type = 64-bit
16/12/25 11:25:36 INFO util.GSet: 2.0% max memory 859 MB = 17.2 MB
16/12/25 11:25:36 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/12/25 11:25:36 INFO namenode.FSNamesystem: fsOwner             = sunnymarkliu (auth:SIMPLE)
16/12/25 11:25:36 INFO namenode.FSNamesystem: supergroup          = supergroup
16/12/25 11:25:36 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/12/25 11:25:36 INFO namenode.FSNamesystem: HA Enabled: false
16/12/25 11:25:36 INFO namenode.FSNamesystem: Append Enabled: true
16/12/25 11:25:37 INFO util.GSet: Computing capacity for map INodeMap
16/12/25 11:25:37 INFO util.GSet: VM type = 64-bit
16/12/25 11:25:37 INFO util.GSet: 1.0% max memory 859 MB = 8.6 MB
16/12/25 11:25:37 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/12/25 11:25:37 INFO util.GSet: Computing capacity for map cachedBlocks
16/12/25 11:25:37 INFO util.GSet: VM type = 64-bit
16/12/25 11:25:37 INFO util.GSet: 0.25% max memory 859 MB = 2.1 MB
16/12/25 11:25:37 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/12/25 11:25:37 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/12/25 11:25:37 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/12/25 11:25:37 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/12/25 11:25:37 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/12/25 11:25:37 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/12/25 11:25:37 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/12/25 11:25:37 INFO util.GSet: VM type = 64-bit
16/12/25 11:25:37 INFO util.GSet: 0.029999999329447746% max memory 859 MB = 263.9 KB
16/12/25 11:25:37 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/12/25 11:25:38 INFO common.Storage: Lock on /home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/in_use.lock acquired by nodename 4960@ubuntu16
16/12/25 11:25:38 INFO namenode.FileJournalManager: Recovering unfinalized segments in /home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current
16/12/25 11:25:38 INFO namenode.FileJournalManager: Finalizing edits file /home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/edits_inprogress_0000000000000000015 -> /home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/edits_0000000000000000015-0000000000000000015
16/12/25 11:25:38 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/fsimage_0000000000000000014,cpktTxId=0000000000000000014)
16/12/25 11:25:38 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
16/12/25 11:25:38 INFO namenode.FSImage: Loaded image for txid 14 from /home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/fsimage_0000000000000000014
16/12/25 11:25:38 INFO namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@1e097d59 expecting start txid #15
16/12/25 11:25:38 INFO namenode.FSImage: Start loading edits file /home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/edits_0000000000000000015-0000000000000000015
16/12/25 11:25:38 INFO namenode.EditLogInputStream: fast-forwarding stream '/home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/edits_0000000000000000015-0000000000000000015' to transaction ID 15
16/12/25 11:25:38 INFO namenode.FSImage: Edits file /home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/edits_0000000000000000015-0000000000000000015 of size 1048576 edits # 1 loaded in 0 seconds
16/12/25 11:25:39 INFO namenode.FSNamesystem: Need to save fs image? true (staleImage=true,haEnabled=false,isRollingUpgrade=false)
16/12/25 11:25:39 INFO namenode.FSImage: Save namespace ...
16/12/25 11:25:39 INFO namenode.NNStorageRetentionManager: Going to retain 2 images with txid >= 14
16/12/25 11:25:39 INFO namenode.NNStorageRetentionManager: Purging old image FSImageFile(file=/home/sunnymarkliu/software/hadoop-2.7.3/hadoop-sunnymarkliu/dfs/namenode/current/fsimage_0000000000000000012,cpktTxId=0000000000000000012)
16/12/25 11:25:39 INFO namenode.FSEditLog: Starting log segment at 16
16/12/25 11:25:40 INFO namenode.NameCache: initialized with 0 entries 0 lookups
16/12/25 11:25:40 INFO namenode.FSNamesystem: Finished loading FSImage in 2321 msecs
16/12/25 11:25:41 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
16/12/25 11:25:41 INFO ipc.Server: Starting Socket Reader #1 for port 9000
16/12/25 11:25:41 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
16/12/25 11:25:41 INFO namenode.LeaseManager: Number of blocks under construction: 0
16/12/25 11:25:41 INFO namenode.LeaseManager: Number of blocks under construction: 0
16/12/25 11:25:41 INFO namenode.FSNamesystem: initializing replication queues
16/12/25 11:25:41 INFO blockmanagement.DatanodeDescriptor: Number of Failed storage changes from 0 to 0
16/12/25 11:25:41 INFO ipc.Server: IPC Server Responder: starting
16/12/25 11:25:41 INFO ipc.Server: IPC Server listener on 9000: starting
16/12/25 11:25:41 INFO namenode.FSNamesystem: Starting services required for active state

Run the DataNode again, and the NameNode adds it to the cluster:

16/12/25 11:26:51 INFO blockmanagement.DatanodeDescriptor: Number of Failed storage changes from 0 to 0
16/12/25 11:26:51 INFO net.NetworkTopology: Adding a new node: /default-rack/192.168.19.1:50010
16/12/25 11:26:52 INFO blockmanagement.DatanodeDescriptor: Number of Failed storage changes from 0 to 0
16/12/25 11:26:52 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-1dfa323c-0aa5-4a00-b818-465fb11c03a1 for DN 192.168.19.1:50010
16/12/25 11:28:37 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 125 Number of transactions batched in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 101
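You can also confirm from the command line that the DataNode registered, using the same configuration (a sketch; the exact report layout varies by version):

bin/hdfs dfsadmin -report
# Live datanodes (1):
# Name: 192.168.19.1:50010 ...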

4. Method 2: Set Up a Pseudo-Distributed Hadoop Environment to Provide the NameNode

(1) For the detailed setup steps, see "Setting Up a Hadoop 2.7.0 Fully Distributed Environment on Ubuntu 16.04".
Leave the slaves file under etc/hadoop empty, so the cluster start scripts launch no DataNodes of their own; ours will be started from IDEA.
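One way to clear it (run from the hadoop-2.7.3 directory):

truncate -s 0 etc/hadoop/slaves   # or simply delete the file's contents in an editor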

(2) Start the NameNode of the hadoop-2.7.3 release from the command line:
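A sketch of the start command (assuming the release directory from earlier; hadoop-daemon.sh picks up etc/hadoop automatically):

cd ~/software/hadoop-2.7.3
sbin/hadoop-daemon.sh start namenode

# follow the log to confirm it came up
tail -f logs/hadoop-*-namenode-*.log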

(3) Start the DataNode from IDEA:

2016-12-20 15:52:22,213 INFO  datanode.DataNode (LogAdapter.java:info(45)) - STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = ubuntu16/192.168.19.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /home/sunnymarkliu/software/jdk1.8.0_101/jre/lib/charsets.jar:...
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'vinodkv' on 2016-08-18T01:01Z
STARTUP_MSG:   java = 1.8.0_101
************************************************************/
2016-12-20 15:52:22,243 INFO  datanode.DataNode (LogAdapter.java:info(45)) - registered UNIX signal handlers for [TERM, HUP, INT]
2016-12-20 15:52:24,216 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-12-20 15:52:26,681 INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(112)) - loaded properties from hadoop-metrics2.properties
2016-12-20 15:52:27,267 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 10 second(s).
2016-12-20 15:52:27,267 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - DataNode metrics system started
2016-12-20 15:52:27,274 INFO  datanode.BlockScanner (BlockScanner.java:<init>(172)) - Initialized block scanner with targetBytesPerSec 1048576
2016-12-20 15:52:27,313 INFO  datanode.DataNode (DataNode.java:<init>(424)) - Configured hostname is ubuntu16
......
2016-12-20 15:52:32,281 INFO  datanode.DataNode (BPServiceActor.java:offerService(626)) - For namenode ubuntu16/192.168.19.1:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-12-20 15:52:32,637 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool BP-684189179-192.168.19.1-1482203980767 (Datanode Uuid 2a495dbf-e87e-410f-ad63-9d36bdc3bd22) service to ubuntu16/192.168.19.1:9000 trying to claim ACTIVE state with txid=12
2016-12-20 15:52:32,637 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging ACTIVE Namenode Block pool BP-684189179-192.168.19.1-1482203980767 (Datanode Uuid 2a495dbf-e87e-410f-ad63-9d36bdc3bd22) service to ubuntu16/192.168.19.1:9000
2016-12-20 15:52:32,737 INFO  datanode.DataNode (BPServiceActor.java:blockReport(492)) - Successfully sent block report 0x2746d6afc60, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 20 msec to generate and 79 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
2016-12-20 15:52:32,737 INFO  datanode.DataNode (BPOfferService.java:processCommandFromActive(694)) - Got finalize command for block pool BP-684189179-192.168.19.1-1482203980767

The NameNode and DataNode both started successfully!
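As a final smoke test, run a filesystem command against the running pair (a sketch, using the same configuration as above):

bin/hdfs dfs -mkdir /test
bin/hdfs dfs -ls /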

Done!
