Hadoop Fully Distributed Cluster Setup in Practice

Environment:

  • Three CentOS 7 hosts

Hostname   IP Address
master     10.30.59.130
slave1     10.30.59.131
slave2     10.30.59.132

Software requirements:

Software   Version
JDK        8u77
Hadoop     2.6.0

  • Software conventions:
    • Installation packages are kept in /opt/soft
    • Software is installed under /opt

Prerequisites:
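
A cluster like this assumes that all three hosts can resolve one another's hostnames and that master can SSH to every node (including itself) without a password, since start-dfs.sh and start-yarn.sh log in over SSH to launch the daemons. A minimal sketch of those assumed prerequisites, using the hostnames and IPs from the table above:

[root@master ~]# cat >> /etc/hosts << EOF
10.30.59.130 master
10.30.59.131 slave1
10.30.59.132 slave2
EOF
[root@master ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[root@master ~]# for h in master slave1 slave2; do ssh-copy-id root@$h; done

The /etc/hosts entries need to be present on all three nodes.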

Steps:

1. Disable the firewall and SELinux

  • Required on all three nodes
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# setenforce 0
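
Note that setenforce 0 only puts SELinux into permissive mode until the next reboot. To keep it disabled across reboots, the usual companion step is to update /etc/selinux/config on each node as well:

[root@master ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config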

2. Unpack the components

[root@master ~]# cd /opt 
[root@master opt]# tar -xzvf soft/jdk-8u77-linux-x64.tar.gz 
[root@master opt]# tar -xzvf soft/hadoop-2.6.0.tar.gz 
[root@master opt]# mv jdk1.8.0_77/ jdk 
[root@master opt]# mv hadoop-2.6.0/ hadoop 

3. Edit the configuration files

[root@master opt]# vi hadoop/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>master:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/hadoop-repo/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/hadoop-repo/data</value>
    </property>
</configuration>
[root@master opt]# vi hadoop/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-repo/tmp</value>
    </property> 
</configuration>
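
Once the PATH changes from step 4 are in place, hdfs getconf is a quick way to confirm that these files are actually being read; the two commands below should echo hdfs://master:9000 and master:50070, matching the values set above:

[root@master opt]# hdfs getconf -confKey fs.defaultFS
[root@master opt]# hdfs getconf -confKey dfs.namenode.http-address
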
[root@master opt]# cp hadoop/etc/hadoop/mapred-site.xml.template hadoop/etc/hadoop/mapred-site.xml 
[root@master opt]# vi hadoop/etc/hadoop/mapred-site.xml 
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/opt/hadoop-repo/history</value>
    </property>
</configuration>
[root@master opt]# vi hadoop/etc/hadoop/yarn-site.xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>slave1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
</configuration>
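
A stray character in any of these XML files will make the daemons fail at startup with a parse error, so it is worth checking well-formedness before moving on. If xmllint is available (it ships with libxml2, normally present on CentOS 7), it prints nothing when a file parses cleanly:

[root@master opt]# xmllint --noout hadoop/etc/hadoop/core-site.xml hadoop/etc/hadoop/hdfs-site.xml hadoop/etc/hadoop/mapred-site.xml hadoop/etc/hadoop/yarn-site.xml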
[root@master opt]# vi hadoop/etc/hadoop/slaves
master
slave1
slave2
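
One pitfall the transcript does not cover: the daemons on the worker nodes are launched over SSH, where /etc/profile.d is not necessarily sourced, and startup then fails with a "JAVA_HOME is not set" error. Setting it explicitly in hadoop-env.sh avoids this (path per the conventions above):

[root@master opt]# vi hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/jdk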

4. Configure the environment variables and apply them immediately

[root@master opt]# vi /etc/profile.d/hadoop-etc.sh
export JAVA_HOME=/opt/jdk
export PATH=$PATH:$JAVA_HOME/bin

export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

[root@master opt]# source /etc/profile.d/hadoop-etc.sh
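
At this point both java -version and hadoop version should resolve. The transcript only shows master being prepared; if slave1 and slave2 were not set up the same way by hand, a sketch for pushing the unpacked software and the profile script out to them (hostnames as defined above) would be:

[root@master opt]# for h in slave1 slave2; do
>   scp -r /opt/jdk /opt/hadoop root@$h:/opt/
>   scp /etc/profile.d/hadoop-etc.sh root@$h:/etc/profile.d/
> done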

5. Format HDFS

  • Run on master only
[root@master opt]# hdfs namenode -format
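
A successful format reports the storage directory as successfully formatted and creates the metadata tree configured in dfs.namenode.name.dir; listing it should show a fresh fsimage along with seen_txid and VERSION files. Avoid re-running the command later, as it wipes that metadata:

[root@master opt]# ls /opt/hadoop-repo/name/current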

6. Start Hadoop

  • Start on master
[root@master opt]# start-dfs.sh 
  • Start on slave1
[root@slave1 opt]# start-yarn.sh 
  • Start on all three nodes
[root@master opt]# mr-jobhistory-daemon.sh start historyserver 

Verification:

[root@master opt]# jps
14310 NameNode
15046 NodeManager
15159 Jps
14059 QuorumPeerMain
14587 DataNode

[root@slave1 ~]# jps
14501 SecondaryNameNode
14699 Jps
14588 NodeManager
14269 QuorumPeerMain
14415 DataNode

[root@slave2 ~]# jps
13120 NodeManager
13218 Jps
13016 DataNode
12863 QuorumPeerMain

[root@master opt]# hdfs dfsadmin -report
Configured Capacity: 93344772096 (86.93 GB)
Present Capacity: 86899245056 (80.93 GB)
DFS Remaining: 86899232768 (80.93 GB)
DFS Used: 12288 (12 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (3):

Name: 10.30.59.132:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 31114924032 (28.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2017771520 (1.88 GB)
DFS Remaining: 29097148416 (27.10 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.52%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jun 07 02:32:49 CST 2019


Name: 10.30.59.130:50010 (master)
Hostname: master
Decommission Status : Normal
Configured Capacity: 31114924032 (28.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2409652224 (2.24 GB)
DFS Remaining: 28705267712 (26.73 GB)
DFS Used%: 0.00%
DFS Remaining%: 92.26%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jun 07 02:32:49 CST 2019


Name: 10.30.59.131:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 31114924032 (28.98 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 2018103296 (1.88 GB)
DFS Remaining: 29096816640 (27.10 GB)
DFS Used%: 0.00%
DFS Remaining%: 93.51%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Jun 07 02:32:50 CST 2019

The following pages also load correctly:

  • http://10.30.59.131:50070
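
As a final end-to-end check, running one of the MapReduce examples bundled with Hadoop 2.6.0 exercises HDFS, the ResourceManager on slave1, and the NodeManagers on all three nodes at once:

[root@master opt]# hadoop jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10

If the job completes and prints an estimated value of Pi, the cluster is fully functional, and the run should afterwards be visible in the JobHistory server as well.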

Original post: https://lmshuai.com/archives/190
Copyright notice: Unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit the source when republishing.
