I. System environment: CentOS 7
Three machines: m1 serves as the NameNode and the other two as DataNodes. Every machine's /etc/hosts contains the matching entries:
192.168.99.81 m1
192.168.99.82 m2
192.168.99.83 m3
Disable the firewall and SELinux:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config # double-check the edit; getting it wrong can leave the system unable to boot
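A quick verification that both are really off (optional; assumes the firewalld and SELinux userland tools are installed):
getenforce                    # prints Permissive now, Disabled after a reboot
systemctl is-active firewalld # prints inactive
grep ^SELINUX= /etc/selinux/config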
Configure passwordless SSH trust between the nodes. To simplify logins and later testing, run this script:
[root@m1 ~]# cat ssh2.sh
#!/bin/bash
yum install expect -y
if [ ! -f ~/.ssh/id_rsa ];then
/usr/bin/expect <<-EOF
spawn ssh-keygen -t rsa
expect {
"Enter file in which to save the key (/root/.ssh/id_rsa):" { send "\r"; exp_continue}
"Enter passphrase (empty for no passphrase):" { send "\r"; exp_continue}
"Enter same passphrase again:" { send "\r"; exp_continue}
}
expect eof
EOF
fi
while read -r hostip uname passwd
do
/usr/bin/expect <<-EOF
set timeout 30
spawn ssh-copy-id -i $uname@$hostip
expect {
"*yes/no" { send "yes\r"; exp_continue }
"*password:" { send "$passwd\r" }
}
expect eof
EOF
done < pwd.txt
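Once the script has run, this one-liner confirms every node is reachable without a password (BatchMode makes ssh fail instead of prompting):
for h in m1 m2 m3; do ssh -o BatchMode=yes $h hostname; done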
[root@m1 ~]# cat pwd.txt
m1 root <password>
m2 root <password>
m3 root <password>
II. Installation and deployment
1. Unpack the distribution
[root@m1 ~]# tar -zxvf /shared/app/install/hadoop-3.2.0.tar.gz -C /hadoop/
Set JAVA_HOME and write it into hadoop-env.sh:
[root@m1 ~]# cat /hadoop/hadoop-3.2.0/etc/hadoop/hadoop-env.sh | grep JAVA_HOME=
# JAVA_HOME=/usr/java/testing hdfs dfs -ls
export JAVA_HOME=/usr/local/java
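If you would rather not open an editor, appending the export also works (a sketch; adjust /usr/local/java to your actual JDK path):
echo 'export JAVA_HOME=/usr/local/java' >> /hadoop/hadoop-3.2.0/etc/hadoop/hadoop-env.sh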
2. Edit the configuration file:
/hadoop/hadoop-3.2.0/etc/hadoop/core-site.xml
[root@m1 hadoop]# cat core-site.xml | grep "<configuration>" -A 20
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://m1:9000</value>
</property>
</configuration>
3. Edit the configuration file:
/hadoop/hadoop-3.2.0/etc/hadoop/hdfs-site.xml
[root@m1 hadoop]# cat hdfs-site.xml | grep "<configuration>" -A 50
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/hadoop/hdfs/data</value>
</property>
<!-- secondary NameNode -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>m2:50090</value>
</property>
</configuration>
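The name and data directories referenced above must be writable; run this on every node (Hadoop would also create them during format/startup, but creating them yourself surfaces permission problems early):
mkdir -p /hadoop/hdfs/name /hadoop/hdfs/data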
4. Write the worker hostnames into the workers file in the same directory (Hadoop 3.x renamed slaves to workers):
[root@m1 hadoop]# cat workers
m2
m3
5. Distribute the hadoop-3.2.0 directory to every server, keeping the path identical; see the sketch below.
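One way to do it, reusing the passwordless SSH set up earlier (a sketch; assumes rsync is installed on all nodes, scp -r works just as well):
for h in m2 m3; do ssh $h mkdir -p /hadoop; rsync -a /hadoop/hadoop-3.2.0 $h:/hadoop/; done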
6. Format the NameNode and start the services
In the /hadoop/hadoop-3.2.0/bin/ directory:
[root@m1 bin]# ./hdfs namenode -format
(the older "hadoop namenode -format" still works in 3.x but prints a deprecation warning; "hdfs namenode -format" is the current form)
On the master node, start the NameNode from the /hadoop/hadoop-3.2.0/sbin/ directory. For convenience, add /hadoop/hadoop-3.2.0/bin/ and /hadoop/hadoop-3.2.0/sbin/ to PATH and source the profile, as sketched below.
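For example (a sketch; HADOOP_HOME is just a convenience variable here):
cat >> /etc/profile <<'EOF'
export HADOOP_HOME=/hadoop/hadoop-3.2.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
source /etc/profile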
Start the service:
[root@m1 sbin]# ./hadoop-daemon.sh start namenode
Then start the DataNode on each data node:
/hadoop/hadoop-3.2.0/sbin/
./hadoop-daemon.sh start datanode
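With passwordless SSH in place, the same can be done from m1 in one loop (a sketch; hadoop-daemon.sh is deprecated in Hadoop 3 in favor of the --daemon flag used here):
for h in m2 m3; do ssh $h /hadoop/hadoop-3.2.0/bin/hdfs --daemon start datanode; done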
Check the startup status on each node with jps. Alternatively, batch-start the whole cluster from the NameNode with start-dfs.sh.
If start-dfs.sh refuses to run as root, edit the script and add the following lines at the top (the same edit applies to stop-dfs.sh):
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
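After startup, a quick status check across all nodes (assumes jps from the JDK is on each node's PATH):
for h in m1 m2 m3; do echo "== $h =="; ssh $h jps; done
m1 should show NameNode, m2 DataNode plus SecondaryNameNode, and m3 DataNode.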
III. Test the installation: write a file and read it back
/hadoop/hadoop-3.2.0/bin/
[root@m1 bin]# ./hdfs dfs -mkdir /demo1
[root@m1 bin]# ./hdfs dfs -put /etc/hosts /demo1
[root@m1 tmp]# hdfs dfs -text /demo1/hosts
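If the hosts file comes back intact, HDFS is working. Two further checks (9870 is the Hadoop 3 NameNode web UI default port):
[root@m1 tmp]# hdfs dfsadmin -report    # should list two live datanodes
and browse to http://m1:9870 for the NameNode web UI.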