How to Deploy Hadoop

This article explains how to deploy Hadoop. The walkthrough is kept simple and step-by-step, so you can follow along and end up with a working installation.


Hadoop deployment

What is Hadoop?

Broad sense: the ecosystem built around the Apache Hadoop software (Hive, ZooKeeper, Spark, HBase, ...).

Narrow sense: the Apache Hadoop software itself.

Official sites:

hadoop.apache.org

hive.apache.org

spark.apache.org

cdh-hadoop:http://archive.cloudera.com/cdh6/cdh/5/hadoop-2.6.0-cdh6.7.0.tar.gz

Hadoop versions:

1.x: not used in production.

2.x: the mainstream choice.

3.x: too new for production use at the time of writing.

a. Pitfalls to watch for.

b. Many companies deploy their big-data environment with CDH 5.x (www.cloudera.com), which bundles the ecosystem components into a single integrated system. The Hadoop it ships as the base environment is 2.6.0-cdh6.7.0; note that this is not identical to Apache Hadoop 2.6.0, because the CDH build of Hadoop carries additional bug fixes.

Hadoop components:

HDFS: storage — the distributed file system.

MapReduce: computation. Jobs are written in Java, but few enterprises write raw Java MapReduce (high development effort, verbose code).

YARN: resource and job scheduling (CPU and memory allocation) — it decides which node each job runs on.
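The map → shuffle → reduce flow that MapReduce automates can be illustrated with plain shell pipes. This is only an analogy to show the phases, not Hadoop itself:

```shell
# Word count as a shell pipeline, mirroring the MapReduce phases.
text='hello world
hello hadoop'

# map:     emit one word per line
# shuffle: sort brings identical keys together
# reduce:  uniq -c aggregates a count per key
printf '%s\n' "$text" | tr -s ' ' '\n' | sort | uniq -c
# "hello" is counted twice, "world" and "hadoop" once each
```

Hadoop does the same three phases, but distributed across many nodes and with fault tolerance.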

-- Install ssh if it is missing:

Ubuntu Linux:

$ sudo apt-get install ssh

$ sudo apt-get install rsync

----------------------------------------------------------------------------------------------------

Installation:

Environment: CentOS, pseudo-distributed install (i.e. a single node).

Hadoop version: hadoop-2.6.0-cdh6.7.0.tar.gz

JDK version: jdk-8u45-linux-x64.gz

Principle: run each piece of software as its own dedicated user:

linux       root user

MySQL     mysqladmin user

hadoop  hadoop user

1. Create the hadoop user and upload the Hadoop tarball

******************************

useradd hadoop

su - hadoop

mkdir app

cd app/

Upload the Hadoop tarball to this directory.

Result:

[hadoop@hadoop app]$ pwd

/home/hadoop/app

[hadoop@hadoop app]$ ls -l

total 304288

drwxr-xr-x 15 hadoop hadoop      4096 Feb 14 23:37 hadoop-2.6.0-cdh6.7.0

-rw-r--r--  1 root   root   311585484 Feb 14 17:32 hadoop-2.6.0-cdh6.7.0.tar.gz

***********************************

2. Deploy the JDK (use the CDH-recommended JDK)

***********************************

Create the JDK directories and upload the JDK tarball:

su - root

mkdir /usr/java             # upload the JDK tarball to this directory

mkdir /usr/share/java   # when deploying CDH, the JDBC jar must go here, otherwise CDH errors out

cd   /usr/java

tar   -xzvf     jdk-8u45-linux-x64.gz  # extract the JDK

drwxr-xr-x 8 uucp  143      4096 Apr 11  2015 jdk1.8.0_45   # note: the owner/group after extraction is wrong; change it to root:root

chown -R root:root jdk1.8.0_45

drwxr-xr-x 8 root root      4096 Apr 11  2015 jdk1.8.0_45

Result:

[root@hadoop java]# pwd

/usr/java

[root@hadoop java]# ll

total 169216

drwxr-xr-x 8 root root      4096 Apr 11  2015 jdk1.8.0_45

-rw-r--r-- 1 root root 173271626 Jan 26 18:35 jdk-8u45-linux-x64.gz

*****************************************

3. Set the Java environment variables

su - root

vi /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_45

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

source /etc/profile

[root@hadoop java]# which java

/usr/java/jdk1.8.0_45/bin/java

**********************

4. Extract Hadoop

su - hadoop

cd  /home/hadoop/app

[hadoop@hadoop002 app]$ tar -xzvf hadoop-2.6.0-cdh6.7.0.tar.gz

[hadoop@hadoop002 app]$ cd hadoop-2.6.0-cdh6.7.0

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ ll

total 76

drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 bin  # executable scripts

drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 bin-mapreduce1

drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 cloudera

drwxr-xr-x  6 hadoop hadoop  4096 Mar 24  2016 etc  # configuration directory (conf)

drwxr-xr-x  5 hadoop hadoop  4096 Mar 24  2016 examples

drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 examples-mapreduce1

drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 include

drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 lib  # jar directory

drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 libexec

drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 sbin # start/stop scripts for the Hadoop daemons

drwxr-xr-x  4 hadoop hadoop  4096 Mar 24  2016 share

drwxr-xr-x 17 hadoop hadoop  4096 Mar 24  2016 src

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$

*********************************************************

Next, configure Hadoop:



cd  /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/etc/hadoop

vi  core-site.xml

<configuration>

  <property>

       <name>fs.defaultFS</name>

       <value>hdfs://localhost:9000</value>

   </property>

</configuration>

vi hdfs-site.xml

<configuration>

<property>

       <name>dfs.replication</name>

       <value>1</value>

   </property>

</configuration>

Set the Hadoop environment variables in hadoop-env.sh, otherwise startup will fail:

vi /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/etc/hadoop/hadoop-env.sh

export HADOOP_CONF_DIR=/home/hadoop/app/hadoop-2.6.0-cdh6.7.0/etc/hadoop

export JAVA_HOME=/usr/java/jdk1.8.0_45

*****************************

*****************************

5. Set up a passwordless ssh trust to localhost

su - hadoop

ssh-keygen  # press Enter at every prompt

cd .ssh   # two files were generated: id_rsa and id_rsa.pub

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # create the authorized_keys trust file

ssh localhost date

The authenticity of host 'localhost (127.0.0.1)' can't be established.

RSA key fingerprint is b1:94:33:ec:95:89:bf:06:3b:ef:30:2f:d7:8e:d2:4c.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

Wed Feb 13 22:41:17 CST 2019

chmod 600 authorized_keys   # critical: without this, `ssh localhost date` prompts for a password even though the hadoop user has none — the key is rejected because the file permissions are too open.
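sshd refuses keys whose authorized_keys file is group- or world-readable, which is why the chmod matters. The permission fix can be verified like this (an illustrative sketch against a scratch file, on GNU/Linux):

```shell
# Recreate the permission fix on a scratch file and verify the mode.
f=$(mktemp)
chmod 644 "$f"          # a typical too-permissive default
chmod 600 "$f"          # owner read/write only, as sshd requires
stat -c '%a' "$f"       # prints 600 (GNU stat)
rm -f "$f"
```

Run the same `stat -c '%a' ~/.ssh/authorized_keys` check on the real file if ssh still asks for a password.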

**********************************

6. Format the NameNode

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ bin/hdfs namenode -format

***************************************

cd /home/hadoop/app/hadoop-2.6.0-cdh6.7.0

bin/hdfs namenode -format  # note: cd-ing into bin and running `hdfs namenode -format` fails with "command not found", because the current directory is not on PATH; use ./hdfs or the full path

***************************************
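The "command not found" puzzle above is general shell behavior, not anything Hadoop-specific: the current directory is not searched for executables unless "." is on PATH, so a program in the working directory must be invoked as ./name. A quick demonstration with a scratch script (hypothetical name `mytool`):

```shell
# Create an executable script in a temporary directory.
d=$(mktemp -d)
printf '#!/bin/sh\necho it-works\n' > "$d/mytool"
chmod +x "$d/mytool"

# Bare "mytool" would fail unless "." is on PATH;
# an explicit path always works:
( cd "$d" && ./mytool )     # prints it-works

rm -rf "$d"
```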

7. Start the Hadoop services

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ sbin/start-dfs.sh

19/02/13 22:47:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [localhost]

localhost: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/logs/hadoop-hadoop-namenode-hadoop002.out

localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/logs/hadoop-hadoop-datanode-hadoop002.out

Starting secondary namenodes [0.0.0.0]

The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.

RSA key fingerprint is b1:94:33:ec:95:89:bf:06:3b:ef:30:2f:d7:8e:d2:4c.

Are you sure you want to continue connecting (yes/no)? yes  # type yes: the ssh trust was configured for localhost, not for 0.0.0.0

0.0.0.0: Warning: Permanently added '0.0.0.0' (RSA) to the list of known hosts.

0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop002.out

19/02/13 22:49:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ sbin/stop-dfs.sh

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ sbin/start-dfs.sh

19/02/13 22:57:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Starting namenodes on [localhost]

localhost: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/logs/hadoop-hadoop-namenode-hadoop002.out

localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/logs/hadoop-hadoop-datanode-hadoop002.out

Starting secondary namenodes [0.0.0.0]

0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh6.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop002.out

19/02/13 22:57:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ jps  # verify the startup; the three HDFS daemons below must be running

15059 Jps

14948 SecondaryNameNode  # checkpoint node, the "second-in-command"

14783 DataNode  # data node, stores the blocks — the "worker"

14655 NameNode  # name node, the master — serves reads and writes

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$

open http://ip:50070  # on success, the HDFS web UI is reachable here



8. Put the hadoop commands on the PATH

***************************************************************

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ cat ~/.bash_profile

# .bash_profile

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

# User specific environment and startup programs

export HADOOP_PREFIX=/home/hadoop/app/hadoop-2.6.0-cdh6.7.0

export PATH=$HADOOP_PREFIX/bin:$PATH

source ~/.bash_profile

echo $HADOOP_PREFIX   # prints /home/hadoop/app/hadoop-2.6.0-cdh6.7.0

***************************************************************

9. Using HDFS — the hdfs dfs commands closely mirror their Linux counterparts

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ bin/hdfs dfs -ls /

19/02/13 23:08:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable


[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ ls /

bin   dev  home  lib64       media  opt   root  sbin     srv  tmp  var

boot  etc  lib   lost+found  mnt    proc  run   selinux  sys  usr


[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ bin/hdfs dfs -mkdir /ruozedata

19/02/13 23:11:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable


[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ bin/hdfs dfs -ls /

19/02/13 23:11:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Found 1 items

drwxr-xr-x   - hadoop supergroup          0 2019-02-13 23:11 /ruozedata


[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$ ls /

bin   dev  home  lib64       media  opt   root  sbin     srv  tmp  var

boot  etc  lib   lost+found  mnt    proc  run   selinux  sys  usr

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$

10. Getting help

[hadoop@hadoop002 hadoop-2.6.0-cdh6.7.0]$  bin/hdfs --help

Homework:

1. Read and take notes on these ssh posts:

http://blog.itpub.net/30089851/viewspace-1992210/

http://blog.itpub.net/30089851/viewspace-2127102/

2. Deploy pseudo-distributed HDFS yourself.

3. Write the pseudo-distributed HDFS deployment up as a blog post.

Tip:

If `su - zookeeper` fails to switch to the zookeeper user:

Fix: in /etc/passwd, change the zookeeper user's login shell from /sbin/nologin to /bin/bash.
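Rather than hand-editing /etc/passwd, the shell change can be scripted with sed. A sketch against a scratch copy of the file (the uid/gid values are made up for illustration; never run sed -i on the real /etc/passwd without a backup — `chsh -s /bin/bash zookeeper` is the safer tool on a real system):

```shell
# Work on a scratch copy that mimics the relevant /etc/passwd line.
tmp=$(mktemp)
echo 'zookeeper:x:995:991::/home/zookeeper:/sbin/nologin' > "$tmp"

# Change only the zookeeper user's login shell, keeping the rest of the line.
sed -i 's|^\(zookeeper:.*:\)/sbin/nologin$|\1/bin/bash|' "$tmp"

grep '^zookeeper' "$tmp"   # the line now ends in /bin/bash
rm -f "$tmp"
```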
