Hadoop 2.X HA Detailed Configuration

Difference between hadoop-daemon.sh and hadoop-daemons.sh


hadoop-daemon.sh starts a daemon on the local node only.

hadoop-daemons.sh starts the daemon on the remote slave nodes as well (over SSH).
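
For example (assuming the worker hosts are listed in the slaves file):

hadoop-daemon.sh start datanode     # starts a DataNode on the local machine only
hadoop-daemons.sh start datanode    # SSHes to each host in slaves and starts a DataNode there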

1. Start the JournalNodes

hadoop-daemons.sh start journalnode

hdfs namenode -initializeSharedEdits    # copies the edits log files to the JournalNodes; on first setup, run this after formatting the NameNode

Visit http://hadoop-yarn1:8480 to check whether the JournalNode is running normally.
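
You can also confirm the process on each JournalNode host:

jps | grep JournalNode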

2. Format the NameNode and start the Active NameNode

a. Format the NameNode on the Active NameNode host

hdfs namenode -format
hdfs namenode -initializeSharedEdits

This completes the JournalNode initialization.

b. Start the Active NameNode

hadoop-daemon.sh start namenode

3. Start the Standby NameNode

a. Bootstrap the Standby node on the Standby NameNode host

Copy the Active NameNode's metadata to the Standby NameNode:

hdfs namenode -bootstrapStandby
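
To confirm the copy, list the Standby node's name directory (assuming the default dfs.namenode.name.dir under hadoop.tmp.dir, as in the configuration below):

ls /opt/modules/hadoop-2.2.0/data/tmp/dfs/name/current    # should now contain the fsimage copied from the Active NameNode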

b. Start the Standby node

hadoop-daemon.sh start namenode

4. Start Automatic Failover

Create a monitoring node (ZNode) such as /hadoop-ha/ns1 in ZooKeeper, then start the whole cluster:

hdfs zkfc -formatZK
start-dfs.sh
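
To verify that the ZNode was created, you can query ZooKeeper directly (assuming zkCli.sh from the ZooKeeper installation is on the PATH):

zkCli.sh -server hadoop-yarn1:2181 ls /hadoop-ha    # should list [ns1]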

5. Check the NameNode state

hdfs haadmin -getServiceState nn1
active
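
The Standby NameNode can be queried the same way:

hdfs haadmin -getServiceState nn2
standby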

6. Manual failover

Manually trigger a failover from nn1 to nn2 (when automatic failover is enabled, the ZKFCs coordinate the transition):

hdfs haadmin -failover nn1 nn2
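
Afterwards the two roles should be swapped:

hdfs haadmin -getServiceState nn1    # standby
hdfs haadmin -getServiceState nn2    # active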

Configuration file details

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp</value>
    </property>
    
    <property>
        <name>fs.trash.interval</name>
        <!-- minutes; one day = 60*24 = 1440. The value must be a literal number; expressions such as 60*24 are not evaluated. -->
        <value>1440</value>
    </property>
    
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-yarn1:2181,hadoop-yarn2:2181,hadoop-yarn3:2181</value>
    </property>
    
    <property>  
        <name>hadoop.http.staticuser.user</name>
        <value>yuanhai</value>
    </property>
</configuration>
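
Because fs.defaultFS points at the logical nameservice ns1 rather than a single host, clients are automatically routed to whichever NameNode is currently active, for example:

hdfs dfs -ls hdfs://ns1/    # same as hdfs dfs -ls /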

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
        
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop-yarn1:8020</value>
    </property>
    
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop-yarn2:8020</value>
    </property>
    
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop-yarn1:50070</value>
    </property>
    
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop-yarn2:50070</value>
    </property>
    
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1</value>
    </property>
    
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp/journal</value>
    </property>
    
     <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    

<!--     <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop-yarn.dragon.org:50070</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-yarn.dragon.org:50090</value>
    </property>
    
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>
    
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
    </property>
    
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/data</value>
    </property>
    
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
    </property>
    
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>${dfs.namenode.checkpoint.dir}</value>
    </property>
-->    
</configuration>
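
Note that sshfence only works if each NameNode can SSH to the other without a password, using the private key named in dfs.ha.fencing.ssh.private-key-files. A minimal setup sketch (run as the hadoop user on each NameNode host):

ssh-keygen -t rsa -N "" -f /home/hadoop/.ssh/id_rsa    # skip if the key already exists
ssh-copy-id hadoop-yarn1
ssh-copy-id hadoop-yarn2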

slaves

hadoop-yarn1
hadoop-yarn2
hadoop-yarn3

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-yarn1</value>
    </property> 
    
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property> 

</configuration>
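
With log aggregation enabled, the logs of finished applications are gathered into HDFS and can be viewed from any node with the yarn CLI (the application ID below is illustrative):

yarn logs -applicationId application_1400000000000_0001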

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-yarn1:10020</value>
        <description>MapReduce JobHistory Server IPC host:port</description>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-yarn1:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port</description>
    </property>
    
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    
</configuration>
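
The JobHistory server configured above is not started by start-yarn.sh; start it separately on hadoop-yarn1:

mr-jobhistory-daemon.sh start historyserver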

hadoop-env.sh

export JAVA_HOME=/opt/modules/jdk1.6.0_24

Other related articles:

http://blog.csdn.net/zhangzhaokun/article/details/17892857
