Hadoop 2.x HA Detailed Configuration

Difference between hadoop-daemon.sh and hadoop-daemons.sh


hadoop-daemon.sh only runs the daemon on the local node

hadoop-daemons.sh runs the daemon on remote nodes (it executes the command over SSH on every host listed in the slaves file)
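
For example (a quick sketch; the datanode service is only used as an illustration here):

hadoop-daemon.sh start datanode      # starts a DataNode on the local machine only
hadoop-daemons.sh start datanode     # starts a DataNode on every host listed in the slaves file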

1. Start the JournalNodes (JN)

hadoop-daemons.sh start journalnode

hdfs namenode -initializeSharedEdits    # copies the edits log files to the JournalNodes; for a first-time setup this must be run after formatting the NameNode

Visit http://hadoop-yarn1:8480 to check whether the JournalNode is running properly.
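
To confirm that the JournalNode process actually came up, a quick check on each journal host (sketch):

jps | grep JournalNode      # a JournalNode process should be listed on every journal host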

2. Format the NameNode and start the Active NameNode

1) Format the NameNode on the Active NameNode node

hdfs namenode -format
hdfs namenode -initializeSharedEdits

This completes the JournalNode initialization.

2) Start the Active NameNode

hadoop-daemon.sh start namenode
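
A quick way to confirm the Active NameNode is up (sketch; the address matches dfs.namenode.http-address.ns1.nn1 in hdfs-site.xml below):

jps | grep NameNode
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop-yarn1:50070      # expect an HTTP 200 once the web UI is serving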

3. Start the Standby NameNode

1) Initialize (bootstrap) the Standby node on the Standby NameNode machine

This copies the metadata from the Active NameNode over to the Standby NameNode node:

hdfs namenode -bootstrapStandby

2) Start the Standby node

hadoop-daemon.sh start namenode
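
Since dfs.namenode.name.dir is not overridden in hdfs-site.xml, the copied metadata lands under hadoop.tmp.dir; a quick sanity check on the Standby host (sketch, assuming that default layout):

ls /opt/modules/hadoop-2.2.0/data/tmp/dfs/name/current      # should now contain fsimage/VERSION files copied from the Active NameNode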

4. Start Automatic Failover

Create a monitoring node (ZNode) such as /hadoop-ha/ns1 in ZooKeeper:

hdfs zkfc -formatZK
start-dfs.sh
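
To verify that the ZNode was actually created, you can look at it with the ZooKeeper client shell (sketch; zkCli.sh ships with the ZooKeeper installation, and the path follows the ns1 nameservice name):

zkCli.sh -server hadoop-yarn1:2181
ls /hadoop-ha
ls /hadoop-ha/ns1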

5. Check the NameNode state

hdfs haadmin -getServiceState nn1
active
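
The other NameNode should report the opposite role:

hdfs haadmin -getServiceState nn2
standby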

6. Trigger a failover

Manually initiate a failover from nn1 to nn2:

hdfs haadmin -failover nn1 nn2
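
Because automatic failover is enabled, another common check (a sketch; run it on whichever host is currently active) is to kill the active NameNode process and confirm that the ZKFC promotes the other node:

jps | grep NameNode                    # note the NameNode pid on the active host
kill -9 <pid>                          # <pid> is a placeholder for the value printed by jps
hdfs haadmin -getServiceState nn2      # should report active shortly afterwards

Afterwards, restart the killed NameNode with hadoop-daemon.sh start namenode; it will come back as the standby.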

Configuration file details

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp</value>
    </property>

    <!-- trash retention: 24 hours, expressed in minutes (the value must be a plain number) -->
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>

    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-yarn1:2181,hadoop-yarn2:2181,hadoop-yarn3:2181</value>
    </property>

    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>yuanhai</value>
    </property>
</configuration>
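
With fs.defaultFS pointing at the logical nameservice, clients address ns1 rather than a specific NameNode host; a quick smoke test (sketch; the /tmp/ha-test path is arbitrary):

hdfs dfs -mkdir -p /tmp/ha-test
hdfs dfs -ls hdfs://ns1/tmp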

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>

    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop-yarn1:8020</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop-yarn2:8020</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop-yarn1:50070</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop-yarn2:50070</value>
    </property>

    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1</value>
    </property>

    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp/journal</value>
    </property>

    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>

    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>

    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
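
The sshfence method only works if the user running the NameNode can reach the other NameNode host over SSH with the key listed above and no password prompt; a quick check from each NameNode machine (sketch, assuming a hadoop user to match the key path):

ssh -i /home/hadoop/.ssh/id_rsa hadoop@hadoop-yarn2 'hostname'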

slaves

hadoop-yarn1
hadoop-yarn2
hadoop-yarn3

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-yarn1</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
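
With log aggregation enabled, the logs of finished applications can be pulled back through the yarn CLI instead of hunting through NodeManager directories (sketch; the application id is a placeholder for whatever YARN assigned to your job):

yarn application -list -appStates FINISHED
yarn logs -applicationId <application_id>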

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-yarn1:10020</value>
        <description>MapReduce JobHistory Server IPC host:port</description>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-yarn1:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port</description>
    </property>

    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
</configuration>
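
The JobHistory addresses above only take effect once the history server is running; it is started separately on hadoop-yarn1 (sketch):

mr-jobhistory-daemon.sh start historyserver

The web UI should then be reachable at http://hadoop-yarn1:19888.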

hadoop-env.sh

export JAVA_HOME=/opt/modules/jdk1.6.0_24

Other related articles:

http://blog.csdn.net/zhangzhaokun/article/details/17892857

