
Hadoop Cluster Setup

Install VMware

Install Linux

Configure Linux

Configure IP addresses

192.168.1.100   master
192.168.1.102   slave2
192.168.1.103   slave1

Configure hostnames

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=master
[root@localhost ~]# hostname master

Configure the hostname on all three hosts; on slave1 and slave2:

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=slave1
[root@localhost ~]# hostname slave1

[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=slave2
[root@localhost ~]# hostname slave2

The /etc/sysconfig/network change only takes effect after a reboot (the hostname command above applies the name to the running session):

[root@localhost ~]# reboot

Then connect with a remote terminal tool; Xshell is used here.

Disable the firewall and SELinux

[root@master ~]# vim /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
#       targeted - Only targeted network daemons are protected.
#       strict - Full SELinux protection.
SELINUXTYPE=targeted

[root@master ~]# service iptables status
Table: filter
Chain INPUT (policy ACCEPT)
num  target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
num  target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
num  target     prot opt source               destination
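The transcript above only checks the firewall status. As a hedged sketch (not in the source transcript), on a CentOS-era system the firewall is typically stopped and kept off across reboots with, run on all three machines:

[root@master ~]# service iptables stop
[root@master ~]# chkconfig iptables off

The SELINUX=disabled change, by contrast, only takes full effect after a reboot.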

Name resolution

Configure /etc/hosts so that master, slave1, and slave2 can reach one another by name. Do this on all three machines.

[root@master ~]# vim /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
::1             localhost6.localdomain6 localhost6
192.168.1.100   master
192.168.1.103   slave1
192.168.1.102   slave2
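A quick resolution check (a hedged addition, not in the original transcript), run from each machine:

[root@master ~]# ping -c 1 slave1
[root@master ~]# ping -c 1 slave2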

Configure Java

Start FTP

[root@master ~]# /etc/init.d/vsftpd restart
Shutting down vsftpd:                                      [FAILED]
Starting vsftpd for vsftpd:                                [  OK  ]

By default, root is not allowed to use FTP.

[root@master vsftpd]# pwd
/etc/vsftpd
[root@master vsftpd]# ls
ftpusers  user_list

Comment out the root entry in both of these files, then restart FTP.
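A hedged one-liner for the same edit (the sed call is an illustration, not part of the original document; the file names are from the listing above):

[root@master vsftpd]# sed -i 's/^root/#root/' ftpusers user_list
[root@master vsftpd]# /etc/init.d/vsftpd restart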

Create a directory on all three machines:

[root@master ~]# mkdir installer
[root@master ~]#

Upload the JDK, E:\开发工具\jdk-6u27-linux-i586-rpm.bin, to the installer directory.

[root@master installer]# ll
total 78876
-rw-r--r-- 1 root root 80680219 12-01 12:50 jdk-6u27-linux-i586-rpm.bin
[root@master installer]# chmod a+x jdk-6u27-linux-i586-rpm.bin
[root@master installer]# ./jdk-6u27-linux-i586-rpm.bin
Unpacking...
Checksumming...
Extracting...
UnZipSFX 5.50 of 17 February 2002, by Info-ZIP (Zip-Bugs@lists.wku.edu).
  inflating: jdk-6u27-linux-i586.rpm

At this point, open a second session and copy the file to the other machines:

[root@master installer]# scp jdk-6u27-linux-i586-rpm.bin slave1:/root/installer
root@slave1's password:
jdk-6u27-linux-i586-rpm.bin

After copying, install the JDK on all three machines.
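The self-extracting -rpm.bin normally goes on to install the unpacked RPM itself; if only the extracted .rpm is copied over, a hedged equivalent on each machine is:

[root@slave1 installer]# rpm -ivh jdk-6u27-linux-i586.rpm

Either way the JDK lands under /usr/java/jdk1.6.0_27, which matches the JAVA_HOME exported later. Verify with java -version as below.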

[root@master installer]# java -version
java version "1.6.0_27"

Add a user

Do this on all three machines:

[root@master ~]# useradd hadoop
[root@master ~]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@master ~]#

Also create the installer directory as the hadoop user on slave1 and slave2:

[root@slave1 ~]# su - hadoop
[hadoop@slave1 ~]$ mkdir installer

Configure SSH equivalence (passwordless SSH)

Generate a key pair as the hadoop user, pressing Enter at every prompt; run this on all three machines:

[hadoop@master ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
5b:d6:30:95:6f:37:b1:d8:e0:ec:b4:cb:94:cc:3f:cc hadoop@master
[hadoop@master ~]$

[hadoop@master ~]$ cd .ssh/
[hadoop@master .ssh]$ ls
id_rsa  id_rsa.pub
[hadoop@master .ssh]$ cat id_rsa.pub > authorized_keys
[hadoop@master .ssh]$ ls
authorized_keys  id_rsa  id_rsa.pub
[hadoop@master .ssh]$

Copy the generated authorized_keys file toward slave1 and slave2; each node appends its own key and forwards the file on:

[hadoop@master .ssh]$ scp authorized_keys slave1:~/.ssh/
The authenticity of host 'slave1 (192.168.1.103)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.1.103' (RSA) to the list of known hosts.
hadoop@slave1's password:
authorized_keys                              100%  395     0.4KB/s   00:00
[hadoop@master .ssh]$

Then on slave1, append slave1's own key:

[hadoop@slave1 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@slave1 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw8taarZ+/ndWV04MqGsnT5cKcYs5LqMmtocWSsIxfUttYpMjwcgktjEPSByb/SFPE3alx0/Te7bjG8nFu2HHV4v++2jNfraqoBjIrO3/ITzHOSGduYmM4xbvBcXCAX5BSawwbpKn8RifPM5M1ZbExFhdZ0njsYSBlq6ZAMV+2F77enfwCI6jB/WhtfClj4QpWuMTQ8O/gqaMbM0OMrIuY84ssoYfDSpl2uUtGBBGY3cyyTDEbQukRH5doapSNPwZQs6lJSVIO7JWLGMfOQbvsqlS0r1nly57I1b7hAMZcGdVWZy2CGclQX3s8a7vjpJ8+iTFtwiAdydFsP+aQ9ldUw== hadoop@master
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqhiMNhNlBZ1+aC+tU9O8HKTd7lSMmqhi7FcBKue/q/H37hyMp+PqS/BVYStvEhtHzcy+1/SJWKqSV0ut1Qh8zUo42w81KW/g1xCt5fAJLe61/XtC2WyTrwfVQbFVXCPTpAarYJTlgy+ZgarD8Qg4hS642dmXKbSUQf/Mjbxd7PpcAZx1GCVOX3wck+7LIQJuLInlAFIXhyP0rq+I80CX9u40utkgJQd6ZVvsqJdnB+eeXr08w16GEOSY8ER2Vksbw69PGJjjKz1eMFpCUNatlf3bgmLp+JBOnlbgEizc21ogwcnyTXKCP9j3ZHTO2pDxAaHJ2hYJnOjr2+GSALzeOw== hadoop@slave1
[hadoop@slave1 .ssh]$

Then transfer it from slave1 to slave2:

[hadoop@slave1 .ssh]$ scp authorized_keys slave2:~/.ssh
The authenticity of host 'slave2 (192.168.1.102)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.1.102' (RSA) to the list of known hosts.
hadoop@slave2's password:
authorized_keys                              100%  790     0.8KB/s   00:00
[hadoop@slave1 .ssh]$

On slave2, append its key as well:

[hadoop@slave2 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@slave2 .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAw8taarZ+/ndWV04MqGsnT5cKcYs5LqMmtocWSsIxfUttYpMjwcgktjEPSByb/SFPE3alx0/Te7bjG8nFu2HHV4v++2jNfraqoBjIrO3/ITzHOSGduYmM4xbvBcXCAX5BSawwbpKn8RifPM5M1ZbExFhdZ0njsYSBlq6ZAMV+2F77enfwCI6jB/WhtfClj4QpWuMTQ8O/gqaMbM0OMrIuY84ssoYfDSpl2uUtGBBGY3cyyTDEbQukRH5doapSNPwZQs6lJSVIO7JWLGMfOQbvsqlS0r1nly57I1b7hAMZcGdVWZy2CGclQX3s8a7vjpJ8+iTFtwiAdydFsP+aQ9ldUw== hadoop@master
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAqhiMNhNlBZ1+aC+tU9O8HKTd7lSMmqhi7FcBKue/q/H37hyMp+PqS/BVYStvEhtHzcy+1/SJWKqSV0ut1Qh8zUo42w81KW/g1xCt5fAJLe61/XtC2WyTrwfVQbFVXCPTpAarYJTlgy+ZgarD8Qg4hS642dmXKbSUQf/Mjbxd7PpcAZx1GCVOX3wck+7LIQJuLInlAFIXhyP0rq+I80CX9u40utkgJQd6ZVvsqJdnB+eeXr08w16GEOSY8ER2Vksbw69PGJjjKz1eMFpCUNatlf3bgmLp+JBOnlbgEizc21ogwcnyTXKCP9j3ZHTO2pDxAaHJ2hYJnOjr2+GSALzeOw== hadoop@slave1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAzyFZKYRXh1HIm+p//kh/P268u6CHQJ88M+vEcb0fEjpXhNoDaVDceuYhQZxc0E/3dJRd86jaRNWnV+G+IPN00ykV2+UJhE2yjsdMa+Yqwy6XU14H25lMaImJGtxpoXO+3kWKJZ1uGB0E2TU2nS+Epb8EI+6ezZ0ilQhgwpc0kQR/jN6d6hUKKK5yTxKZg4agn4QsOZhyBNQZX7tLofHELR970T5n7to19UejB1j09AVdME+TYf7q3reLYHtVA1NsD7+wQcPB3WOKCRhHU5Uas+Rd3ukIP2/H8h13mJ5NHhq5FzxdVa62OPw9BKZVVO2vXp7SvxJG0MW0Aw8fO+AuRQ== hadoop@slave2
[hadoop@slave2 .ssh]$

Then send the completed file back to master and slave1:

[hadoop@slave2 .ssh]$ scp authorized_keys master:~/.ssh/
The authenticity of host 'master (192.168.1.100)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.1.100' (RSA) to the list of known hosts.
hadoop@master's password:
authorized_keys                              100% 1185     1.2KB/s   00:00
[hadoop@slave2 .ssh]$

[hadoop@slave2 .ssh]$ scp authorized_keys slave1:~/.ssh/
The authenticity of host 'slave1 (192.168.1.103)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.1.103' (RSA) to the list of known hosts.
hadoop@slave1's password:
authorized_keys                              100% 1185     1.2KB/s   00:00
[hadoop@slave2 .ssh]$

Change the permissions on all three machines:

[hadoop@master .ssh]$ chmod 600 authorized_keys

The configuration is now complete: ssh slave1 (and every other node-to-node hop) connects without prompting for a password.
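A quick verification (a hedged sketch, not part of the original transcript), run from each node in turn; every hop should print the remote hostname without a password prompt, though a first-time host-key question may still appear once:

[hadoop@master ~]$ for h in master slave1 slave2; do ssh $h hostname; done
master
slave1
slave2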

Configure Hadoop

Upload the Hadoop tarball as the hadoop user, then extract it:

[hadoop@master installer]$ tar xzf hadoop-1.2.1.tar.gz
[hadoop@master installer]$ ll
total 62428
drwxr-xr-x 15 hadoop hadoop     4096 2013-07-23 hadoop-1.2.1
-rw-r--r--  1 hadoop hadoop 63851630 12-01 13:20 hadoop-1.2.1.tar.gz
[hadoop@master installer]$

Create a symbolic link:

[hadoop@master installer]$ mv hadoop-1.2.1 ..
[hadoop@master installer]$ cd ..
[hadoop@master ~]$ ln -s hadoop-1.2.1/ hadoop
[hadoop@master ~]$ ll
total 8
lrwxrwxrwx  1 hadoop hadoop   13 12-01 13:22 hadoop -> hadoop-1.2.1/
drwxr-xr-x 15 hadoop hadoop 4096 2013-07-23 hadoop-1.2.1
drwxrwxr-x  2 hadoop hadoop 4096 12-01 13:22 installer
[hadoop@master ~]$

Configure environment variables:

[hadoop@master ~]$ vim .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific aliases and functions

# Hadoop 1.0
export JAVA_HOME=/usr/java/jdk1.6.0_27
export HADOOP1_HOME=/home/hadoop/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP1_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib

Copy it to slave1 and slave2:

[hadoop@master ~]$ scp .bashrc slave1:~
.bashrc                                      100%  308     0.3KB/s   00:00
[hadoop@master ~]$ scp .bashrc slave2:~
The authenticity of host 'slave2 (192.168.1.102)' can't be established.
RSA key fingerprint is 61:e5:be:d1:92:41:b4:22:4e:88:ff:b1:b1:a1:64:bb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.1.102' (RSA) to the list of known hosts.
.bashrc                                      100%  308     0.3KB/s   00:00
[hadoop@master ~]$
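The exports only apply to new login shells; as a hedged aside, apply them to the current session on each machine with:

[hadoop@master ~]$ source ~/.bashrc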

Configure the Hadoop files:

[hadoop@master ~]$ cd hadoop
[hadoop@master hadoop]$ cd conf
[hadoop@master conf]$ vim hadoop-env.sh
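The hadoop-env.sh edit itself is not shown in the source; as a hedged sketch, the usual Hadoop 1.x change is to uncomment and set JAVA_HOME so the daemons find the JDK installed earlier:

# in conf/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.6.0_27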

[hadoop@master conf]$ vim core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/
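For context (a hedged aside): fs.default.name is the URI every HDFS client and daemon uses to reach the NameNode, here the master host on port 9000, and hadoop.tmp.dir sets the base directory under which Hadoop keeps its local data (its value is truncated above). Once the remaining setup is done and the cluster is running, the setting is exercised like this:

[hadoop@master ~]$ hadoop fs -ls hdfs://master:9000/
[hadoop@master ~]$ hadoop fs -ls /          # equivalent, resolved via fs.default.name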
