Deploying Pseudo-Distributed HDFS as the hadoop User on CentOS 6.5 - 创新互联
This article walks through deploying HDFS in pseudo-distributed mode under a dedicated hadoop user on CentOS 6.5. The steps are shown in detail; readers who are interested can follow along and adapt them to their own environment.
1. Check that the hadoop user exists, then edit core-site.xml to set the NameNode address
[root@hadoop001 hadoop]# pwd
/opt/software/hadoop-2.8.1/etc/hadoop
[root@hadoop001 hadoop]# vim core-site.xml
-- Add the following property, changing localhost to the host IP:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>
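The edit above can be rehearsed outside the real config directory. The sketch below writes a minimal core-site.xml into a scratch directory and checks the fs.defaultFS value; 192.168.0.129 is a placeholder for your own host IP, and the scratch path stands in for /opt/software/hadoop-2.8.1/etc/hadoop:

```shell
# Sketch: generate a minimal core-site.xml in a scratch directory and
# verify the fs.defaultFS value. 192.168.0.129 is a placeholder IP.
CONF_DIR=$(mktemp -d)
NN_ADDR="hdfs://192.168.0.129:9000"

cat > "$CONF_DIR/core-site.xml" <<EOF
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>${NN_ADDR}</value>
  </property>
</configuration>
EOF

# Confirm the value actually landed in the file
grep -q "$NN_ADDR" "$CONF_DIR/core-site.xml" && echo "fs.defaultFS = $NN_ADDR"
```

In real use you would make the same edit in place with vim, as shown above; the scratch copy just lets you verify the XML before touching the live config.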
# Configure the DataNode host address
[root@hadoop001 hadoop]# pwd
/opt/software/hadoop-2.8.1/etc/hadoop
[root@hadoop001 hadoop]# vim slaves
localhost -- change localhost to the host IP; for multiple DataNodes, list one host per line
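A slaves file with more than one DataNode simply lists one host per line. The sketch below writes such a file into a scratch directory and counts the entries; both IPs are hypothetical:

```shell
# Sketch: the slaves file lists DataNode hosts ONE PER LINE.
# Both addresses below are made-up examples.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/slaves" <<EOF
192.168.0.129
192.168.0.130
EOF

# Each non-empty line is one DataNode
DN_COUNT=$(grep -c . "$CONF_DIR/slaves")
echo "DataNodes configured: $DN_COUNT"
```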
# Configure the SecondaryNameNode address
[root@hadoop001 hadoop]# vim hdfs-site.xml
-- Add:
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>192.168.0.129:50090</value>
</property>
<property>
  <name>dfs.namenode.secondary.https-address</name>
  <value>192.168.0.129:50091</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
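As a quick sanity check on hand-edited config files, a value can be pulled back out with sed. This sketch writes a trimmed-down hdfs-site.xml to a scratch directory and extracts dfs.replication; the naive name/value pattern below assumes the hand-written one-property-per-pair layout used in this article, not arbitrary XML:

```shell
# Sketch: write a trimmed hdfs-site.xml to a scratch dir and pull
# dfs.replication back out with sed. 192.168.0.129 is a placeholder.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>192.168.0.129:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# Naive extraction: find the <name> line, read the next <value> line
REPLICATION=$(sed -n '/dfs.replication/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' "$CONF_DIR/hdfs-site.xml")
echo "dfs.replication = $REPLICATION"
```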
6. Remove the DFS files created by the root user, then format the DFS
[root@hadoop001 tmp]# rm -rf /tmp/hadoop-* /tmp/hsperfdata-*
[root@hadoop001 tmp]# su - hadoop
[hadoop@hadoop001 hadoop-2.8.1]$ hdfs namenode -format
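The `rm -rf /tmp/hadoop-* /tmp/hsperfdata-*` cleanup above is destructive, so it is worth rehearsing the glob in a sandbox before running it against the real /tmp. The directory names below are made up to mimic what Hadoop leaves behind:

```shell
# Sketch: rehearse the cleanup glob in a sandbox instead of /tmp.
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/hadoop-root" "$SANDBOX/hsperfdata-root" "$SANDBOX/keep-me"

# Same pattern as the real cleanup, but rooted in the sandbox
rm -rf "$SANDBOX"/hadoop-* "$SANDBOX"/hsperfdata-*

ls "$SANDBOX"   # only keep-me should remain
```

Once you are satisfied the glob matches only the stale Hadoop directories, run the real command as root and then format the NameNode as the hadoop user.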
7. Start HDFS as the hadoop user
[hadoop@hadoop001 sbin]$ pwd
/opt/software/hadoop-2.8.1/sbin
[hadoop@hadoop001 sbin]$ ./start-dfs.sh -- prompts for a password on the first start
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop001.out
192.168.0.129: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
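start-dfs.sh asks for a password because it reaches each daemon's host over SSH. Setting up key-based login for the hadoop user removes that prompt on later starts. The sketch below generates a key pair and an authorized_keys entry in a scratch directory; in real use the files live in ~hadoop/.ssh and the public key is appended with ssh-copy-id or cat:

```shell
# Sketch: key-based SSH setup for passwordless start-dfs.sh.
# A scratch dir stands in for the real ~hadoop/.ssh here.
SSH_DIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$SSH_DIR/id_rsa" -q

# Authorize the key for login (normally ~/.ssh/authorized_keys)
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
# After this, ssh to the host should not prompt for a password
```

After starting, `jps` should show NameNode, DataNode, and SecondaryNameNode processes, matching the log lines above.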
That covers deploying pseudo-distributed HDFS as the hadoop user on CentOS 6.5. Hopefully the steps above are helpful; if you found the article useful, feel free to share it so more people can see it.