Hadoop Installation and Deployment Guide

1. System versions
master: Linux hadooop1 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
slave:  Linux hadooop1 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 x86_64 x86_64 x86_64 GNU/Linux
Hadoop: hadoop-2.5.2

2. Environment plan
IP               User/password    Hostname
192.168.124.145  Hadoop/123123    Hmaster
192.168.124.146  Hadoop/123123    Hslave

In the steps below, each command is labeled with the machine (Hmaster or Hslave) it should be run on.

3. Set the hostnames on the two machines (as root)
On Hmaster:
# echo kernel.hostname=Hmaster >> /etc/sysctl.conf
Apply the setting and check it:
# /sbin/sysctl -p
# hostname
Hmaster
# vi /etc/sysconfig/network
HOSTNAME=Hmaster
Then map the hostnames to IP addresses:
# vi /etc/hosts
192.168.124.145 Hmaster
192.168.124.146 Hslave

On Hslave:
# echo kernel.hostname=Hslave >> /etc/sysctl.conf
Apply the setting and check it:
# /sbin/sysctl -p
# hostname
Hslave
Then map the hostnames to IP addresses:
# vi /etc/hosts
192.168.124.145 Hmaster
192.168.124.146 Hslave
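After editing /etc/hosts, the name-to-IP mapping can be read back as a quick sanity check. The sketch below is not part of the original procedure: `lookup_host` is a hypothetical helper, and it is demonstrated against a sample file in /tmp rather than the real /etc/hosts.

```shell
#!/bin/sh
# lookup_host FILE NAME: print the IP mapped to NAME in an
# /etc/hosts-style FILE; return non-zero if NAME is not present.
lookup_host() {
    ip=$(awk -v name="$2" '!/^[[:space:]]*#/ {
        for (i = 2; i <= NF; i++) if ($i == name) { print $1; exit }
    }' "$1")
    [ -n "$ip" ] && printf '%s\n' "$ip"
}

# Sample file mirroring the entries configured above.
cat > /tmp/hosts.sample <<'EOF'
192.168.124.145 Hmaster
192.168.124.146 Hslave
EOF

lookup_host /tmp/hosts.sample Hmaster   # prints 192.168.124.145
```

Against the live file this is simply `lookup_host /etc/hosts Hslave`.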
4. Create the Hadoop user on each machine
On Hmaster, as root, create the Hadoop group and the Hadoop user:
# groupadd Hadoop
# useradd -g Hadoop Hadoop
# passwd Hadoop
On Hslave, as root, create the Hadoop group and the Hadoop user:
# groupadd Hadoop
# useradd -g Hadoop Hadoop
# passwd Hadoop

5. Set up passwordless SSH login between the two machines
ssh is normally installed by default; install it first if it is missing.
On Hmaster, switch to the Hadoop user.
Step 1: generate a key pair
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/home/Hadoopm/.ssh'.
Your identification has been saved in /home/Hadoopm/.ssh/id_dsa.
Your public key has been saved in /home/Hadoopm/.ssh/id_dsa.pub.
The key fingerprint is:
6e:96:18:54:e0:01:0f:09:d8:ba:9b:87:40:c3:d8:6d Hadoopm@Hmaster
(randomart image omitted)
Step 2: append the public key to authorized_keys
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
Step 3: test the passwordless connection
$ ssh Hmaster
The first login asks for confirmation; after exiting, logging in again requires no password. If a host-key confirmation prompt appears, you can use:
$ ssh -o StrictHostKeyChecking=no Hmaster
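The chmod in step 2 matters: sshd silently ignores authorized_keys if it, or the ~/.ssh directory, is too permissive (typically 600 and 700 respectively). The sketch below checks permission bits; it assumes GNU `stat -c`, and `check_mode` and the /tmp paths are illustrative only, with throwaway files standing in for the real ~/.ssh targets.

```shell
#!/bin/sh
# check_mode PATH OCTAL: succeed only if PATH's permission bits
# equal OCTAL (e.g. 700, 600). Assumes GNU stat.
check_mode() {
    [ "$(stat -c '%a' "$1")" = "$2" ]
}

# Demonstrate on throwaway files (the real targets would be
# ~/.ssh and ~/.ssh/authorized_keys).
mkdir -p /tmp/sshdemo && chmod 700 /tmp/sshdemo
touch /tmp/sshdemo/authorized_keys && chmod 600 /tmp/sshdemo/authorized_keys

check_mode /tmp/sshdemo 700 && check_mode /tmp/sshdemo/authorized_keys 600 && echo permissions-ok
```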
Likewise on Hslave, switch to the Hadoop user and generate a key pair:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Your identification has been saved in /home/Hadoops/.ssh/id_dsa.
Your public key has been saved in /home/Hadoops/.ssh/id_dsa.pub.
The key fingerprint is:
a6:c2:c8:17:76:95:8a:9a:3d:c9:f1:61:67:2d:09:c5 Hadoops@Hslave
(randomart image omitted)
Append the public key to authorized_keys:
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
Test the passwordless connection:
$ ssh Hslave
Step 4: let Hmaster and Hslave access each other without a password
On Hmaster, go to the Hadoop user's .ssh directory (as Hadoop):
$ scp authorized_keys Hadoop@Hslave:~/.ssh/authorized_keys_from_master
Enter the Hadoop password for Hslave:
Hadoop@hslave's password:
authorized_keys                               100%  605     0.6KB/s   00:00
On Hslave, go to the Hadoop user's .ssh directory (as Hadoop):
$ cat authorized_keys_from_master >> authorized_keys
$ scp authorized_keys Hadoop@Hmaster:~/.ssh/authorized_keys_from_slave
Enter the Hadoop password for Hmaster:
Hadoop@hmaster's password:
authorized_keys                               100% 1207     1.2KB/s   00:00
On Hmaster, go to the Hadoop user's .ssh directory (as Hadoop):
$ cat authorized_keys_from_slave >> authorized_keys
Step 5: test passwordless login in both directions
On Hmaster (as Hadoop):
$ ssh Hslave
Last login: Tue Nov  1 23:55:42 2016 from hmaster
Success.
On Hslave (as Hadoop):
$ ssh Hmaster
Last login: Tue Nov  1 23:56:13 2016 from hslave
Success.
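The append-and-copy exchange in step 4 duplicates key entries if it is ever re-run. One hedged alternative is an idempotent merge; `merge_keys` is a hypothetical helper, not part of the original procedure (note it also sorts the file, which authorized_keys tolerates), demonstrated here on throwaway files standing in for the two hosts' key lists.

```shell
#!/bin/sh
# merge_keys DEST SRC...: make DEST the de-duplicated union of
# DEST and every SRC file, one key per line.
merge_keys() {
    dest=$1; shift
    touch "$dest"                    # tolerate a missing DEST
    sort -u -o "$dest" "$dest" "$@"  # sort reads all inputs before writing
}

# Demo with throwaway files standing in for the two hosts' keys.
printf 'ssh-dss AAAA Hadoop@Hmaster\n' > /tmp/keys_master
printf 'ssh-dss AAAA Hadoop@Hmaster\nssh-dss BBBB Hadoop@Hslave\n' > /tmp/keys_slave
merge_keys /tmp/keys_master /tmp/keys_slave
wc -l < /tmp/keys_master   # 2 lines: each key exactly once
```

Re-running `merge_keys` with the same inputs leaves the file unchanged, which plain `cat >>` does not.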
6. Install the JDK
Upload jdk-7u75-linux-x64.tar.gz to both Hmaster and Hslave and run the following (as Hadoop):
$ tar -zxvf jdk-7u75-linux-x64.tar.gz
Note the extraction path and set the Hadoop user's environment variables:
$ vi ~/.bash_profile
export JAVA_HOME=/home/Hadoop/jdk1.7.0_75
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$JAVA_HOME/bin:$PATH

7. Install hadoop-2.5.2
1. Unpack
On Hmaster, unpack hadoop-2.5.2.tar.gz and create the following directories (as Hadoop):
$ tar -zxvf hadoop-2.5.2.tar.gz
$ mkdir ~/dfs
$ mkdir ~/dfs/name
$ mkdir ~/dfs/data
$ mkdir ~/tmp
2. Edit the configuration files
Go to the /home/Hadoop/hadoop-2.5.2/etc/hadoop directory (as Hadoop):
$ cd /home/Hadoop/hadoop-2.5.2/etc/hadoop
2.1 Edit core-site.xml
$ vi core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/Hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Hmaster:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
</configuration>
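When several nodes need identical configuration, these files can be generated rather than hand-edited in vi. A minimal sketch of that idea; `write_core_site`, the /tmp output path, and the hard-coded property values are assumptions for illustration, not a Hadoop tool.

```shell
#!/bin/sh
# write_core_site OUTFILE NAMENODE_HOST: emit a core-site.xml that
# points fs.defaultFS at NAMENODE_HOST:9000.
write_core_site() {
    cat > "$1" <<EOF
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://$2:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/Hadoop/tmp</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
</configuration>
EOF
}

write_core_site /tmp/core-site.xml Hmaster
grep -c '<property>' /tmp/core-site.xml   # 3 properties emitted
```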
2.2 Edit hdfs-site.xml
$ vi hdfs-site.xml
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>hadoop-cluster1</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Hmaster:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/Hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/Hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
2.3 Edit mapred-site.xml
$ cp mapred-site.xml.template mapred-site.xml
$ vi mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.http.address</name>
    <value>Hmaster:50030</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>Hmaster:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>Hmaster:19888</value>
  </property>
</configuration>
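After editing these files it is easy to typo a hostname or port, so a property value can be read back for a quick check. This is a rough sketch with a deliberately naive parser (it assumes well-formed files and unique property names); `get_prop` is a hypothetical helper, demonstrated on a throwaway sample file.

```shell
#!/bin/sh
# get_prop FILE NAME: print the <value> paired with <name>NAME</name>
# in a Hadoop-style configuration FILE (naive one-property-per-name parser).
get_prop() {
    tr -d '\n' < "$1" |
        sed -n "s/.*<name>$2<\/name>[^<]*<value>\([^<]*\)<\/value>.*/\1/p"
}

# Demo against a throwaway file containing one of the properties above.
cat > /tmp/hdfs-site.sample <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

get_prop /tmp/hdfs-site.sample dfs.replication   # prints 1
```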
2.4 Edit yarn-site.xml
$ vi yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>Hmaster:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>Hmaster:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>Hmaster:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>Hmaster:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>Hmaster:8088</value>
  </property>
</configuration>
Port 8088 is the Hadoop (YARN) web UI port.
2.5 Edit slaves
$ vi slaves
Hslave
2.6 Edit hadoop-env.sh
$ vi hadoop-env.sh
export JAVA_HOME=/home/Hadoop/jdk1.7.0_75
2.7 Edit yarn-env.sh
$ vi yarn-env.sh
export JAVA_HOME=/home/Hadoop/jdk1.7.0_75
3. Copy the hadoop-2.5.2 directory to Hslave
$ scp -r ~/hadoop-2.5.2 Hadoop@Hslave:~/
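With more than one worker, the copy in step 3 is usually driven by a loop over the slaves file. In the sketch below only the parsing helper is exercised; the scp line is shown commented out since it needs live hosts, and both `list_workers` and the sample file are illustrative, not part of Hadoop.

```shell
#!/bin/sh
# list_workers FILE: print each worker hostname in a slaves-style
# FILE, skipping blank lines and # comments.
list_workers() {
    grep -Ev '^[[:space:]]*(#|$)' "$1"
}

# Demo with a throwaway slaves file.
printf 'Hslave\n# future node\n\n' > /tmp/slaves.sample
list_workers /tmp/slaves.sample   # prints Hslave

# Real usage would loop over the actual file, e.g.:
# for w in $(list_workers ~/hadoop-2.5.2/etc/hadoop/slaves); do
#     scp -r ~/hadoop-2.5.2 "Hadoop@$w:~/"
# done
```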
8. Format the file system
On both Hmaster and Hslave:
$ /home/Hadoop/hadoop-2.5.2/bin/hdfs namenode -format
Note: this does not format the hard disk; it only cleans out the dfs.namenode.name.dir and dfs.datanode.data.dir directories configured in the master's hdfs-site.xml.

9. Start HDFS
On Hmaster:
$ cd /home/Hadoop/hadoop-2.5.2/sbin
$ ./start-dfs.sh
If the warning "util.NativeCodeLoader: Unable to load native-hadoop library for your platform" appears, upload all the hadoop-native-64-2.5.2 files and overwrite the contents of /home/Hadoop/hadoop-2.5.2/lib/native/. Download address: (truncated in the source)

10. Start YARN
On Hmaster:
$ cd /home/Hadoop/hadoop-2.5.2/sbin
$ ./start-yarn.sh

11. Check the processes
On Hmaster (as Hadoop):
$ jps
With this configuration, expect NameNode, SecondaryNameNode, and ResourceManager.
On Hslave (as Hadoop):
$ jps
Expect DataNode and NodeManager.

12. Check the cluster status in a browser
HDFS: http://Hmaster:50070/
YARN: http://Hmaster:8088/

13. Stop HDFS
$ cd /home/Hadoop/hadoop-2.5.2/sbin
$ ./stop-dfs.sh

14. Stop YARN
$ cd /home/Hadoop/hadoop-2.5.2/sbin
$ ./stop-yarn.sh