刘明帅
Hadoop本地伪分布式搭建实战

Environment:

  • One CentOS 7 host

  Hostname   IP address
  Master     10.30.59.130
  Slave1     10.30.59.131
  Slave2     10.30.59.132

  (Only one of these hosts is needed for the pseudo-distributed setup.)

Software requirements:

  Software    Version
  JDK         8u77
  Zookeeper   3.4.5
  Hadoop      2.6.0
  HBase       1.2.11
  • Conventions:
    • Installation packages are kept in /opt/soft
    • Software is installed under /opt

Prerequisites:

  • Passwordless SSH login must be configured for both localhost and 0.0.0.0
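The original post does not show how the passwordless login was set up; a minimal sketch with standard OpenSSH tooling (default key paths assumed) is:

```shell
# Generate a key pair (if one does not already exist) and authorize it locally,
# so that `ssh localhost` and `ssh 0.0.0.0` log in without a password.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

Log in once by hand (`ssh localhost`, then `ssh 0.0.0.0`) to accept the host keys; otherwise the Hadoop start scripts will stall on the host-key prompt.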

Steps:

1. Disable the firewall and SELinux

[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# systemctl disable firewalld 
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@localhost ~]# setenforce 0

Note that setenforce 0 only disables SELinux enforcement until the next reboot; set SELINUX=disabled in /etc/selinux/config to make the change permanent.

2. Unpack the components

[root@localhost ~]# cd /opt 
[root@localhost opt]# tar -xzvf soft/jdk-8u77-linux-x64.tar.gz
[root@localhost opt]# tar -xzvf soft/hadoop-2.6.0.tar.gz
[root@localhost opt]# mv jdk1.8.0_77/ jdk 
[root@localhost opt]# mv hadoop-2.6.0/ hadoop 

3. Edit the configuration files

[root@localhost opt]# vi hadoop/etc/hadoop/hdfs-site.xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/hadoop-repo/name1,file:///opt/hadoop-repo/name2</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/hadoop-repo/data1,file:///opt/hadoop-repo/data2</value> 
    </property> 
</configuration>
[root@localhost opt]# vi hadoop/etc/hadoop/core-site.xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://127.0.0.1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-repo/tmp</value>
    </property> 
</configuration>
[root@localhost opt]# cp hadoop/etc/hadoop/mapred-site.xml.template hadoop/etc/hadoop/mapred-site.xml
[root@localhost opt]# vi hadoop/etc/hadoop/mapred-site.xml 
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/opt/hadoop-repo/history</value>
    </property>
</configuration>
[root@localhost opt]# vi hadoop/etc/hadoop/yarn-site.xml
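The original post does not show the yarn-site.xml contents. A minimal configuration commonly used for running MapReduce on YARN in a pseudo-distributed setup (an assumption, not the author's verbatim file) would be:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <property>
        <!-- Required so MapReduce jobs can shuffle intermediate data on YARN -->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```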

4. Configure environment variables and apply them immediately

[root@localhost opt]# vi /etc/profile.d/hadoop-etc.sh
export JAVA_HOME=/opt/jdk
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

[root@localhost opt]# source /etc/profile.d/hadoop-etc.sh

5. Format HDFS

[root@localhost opt]# hdfs namenode -format

6. Start Hadoop

[root@localhost opt]# start-dfs.sh 
[root@localhost opt]# start-yarn.sh 
[root@localhost opt]# mr-jobhistory-daemon.sh start historyserver 

Verification:

[root@localhost opt]# jps 
14400 ResourceManager 
14673 NodeManager 
14867 JobHistoryServer 
14903 Jps 
12568 NameNode 
12843 SecondaryNameNode 
12686 DataNode 

[root@localhost opt]# hdfs dfsadmin -report 
Configured Capacity: 62229848064 (57.96 GB) 
Present Capacity: 57338130432 (53.40 GB) 
DFS Remaining: 57155321856 (53.23 GB) 
DFS Used: 182808576 (174.34 MB) 
DFS Used%: 0.32% 
Under replicated blocks: 0 
Blocks with corrupt replicas: 0 
Missing blocks: 0 
------------------------------------------------- 
Live datanodes (1): 

Name: 127.0.0.1:50010 (localhost) 
Hostname: localhost 
Decommission Status : Normal 
Configured Capacity: 62229848064 (57.96 GB) 
DFS Used: 182808576 (174.34 MB) 
Non DFS Used: 4891717632 (4.56 GB) 
DFS Remaining: 57155321856 (53.23 GB) 
DFS Used%: 0.29% 
DFS Remaining%: 91.85% 
Configured Cache Capacity: 0 (0 B) 
Cache Used: 0 (0 B) 
Cache Remaining: 0 (0 B) 
Cache Used%: 100.00% 
Cache Remaining%: 0.00% 
Xceivers: 1 
Last contact: Thu Jun 06 19:41:43 CST 2019
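As a quick consistency check (not part of the original post), the report's figures fit together: Present Capacity equals DFS Used plus DFS Remaining, and the summary's 0.32% versus the datanode's 0.29% comes from dividing DFS Used by Present versus Configured capacity:

```shell
# Cross-check the dfsadmin figures reported above (all values in bytes).
configured=62229848064; present=57338130432
used=182808576; remaining=57155321856; non_dfs=4891717632

# Present Capacity = DFS Used + DFS Remaining
[ $((used + remaining)) -eq "$present" ] && echo "present capacity: consistent"
# Configured Capacity = DFS Used + DFS Remaining + Non DFS Used
[ $((used + remaining + non_dfs)) -eq "$configured" ] && echo "configured capacity: consistent"

# The two DFS Used% figures use different denominators:
awk -v u="$used" -v p="$present"    'BEGIN { printf "summary  DFS Used%%: %.2f%%\n", 100*u/p }'   # 0.32%
awk -v u="$used" -v c="$configured" 'BEGIN { printf "datanode DFS Used%%: %.2f%%\n", 100*u/c }'   # 0.29%
```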

At this point, uploading files to HDFS and running MapReduce jobs both complete normally.

The pseudo-distributed setup is done.

Article link: https://lmshuai.com/archives/177
Copyright notice: unless otherwise stated, all posts on this blog are licensed under CC BY-NC-SA 4.0. Please credit the source when reposting!

2019-06-29