A brief note on deploying Spark on a Hadoop cluster

The deployment environment is the same cluster used in the earlier Hadoop deployment notes.

 

1. Download and extract

Download the tarballs:

wget https://mirror.bit.edu.cn/apache/spark/spark-3.0.0-preview2/spark-3.0.0-preview2-bin-hadoop3.2.tgz

wget https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.tgz

 

Extract them to the target directories (a sketch of the extraction commands follows the listing below).

The directory layout after extraction looks like this:

[root@m1 hadoop]# pwd
/hadoop
[root@m1 hadoop]# ls
hadoop-3.2.0  hdfs  hive  scala  spark
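
The extraction commands themselves are not shown above; a minimal sketch, assuming the tarballs were downloaded into /hadoop and the unpacked directories are renamed to the short names in the listing:

cd /hadoop
tar -zxvf spark-3.0.0-preview2-bin-hadoop3.2.tgz
mv spark-3.0.0-preview2-bin-hadoop3.2 spark
tar -zxvf scala-2.12.2.tgz
mv scala-2.12.2 scala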

 

2. Configure Scala and Spark

Configure the Scala and Spark environment variables in /etc/profile, then run source /etc/profile.

The last few lines of /etc/profile look like this:

export JAVA_HOME=/usr/local/java
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=$JAVA_HOME/jre/lib/ext:$JAVA_HOME/lib/tools.jar
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so
export HADOOP_HOME=/hadoop/hadoop-3.2.0
export PATH=$HADOOP_HOME/sbin:$PATH
export PATH=$HADOOP_HOME/bin:$PATH
export HIVE_HOME=/hadoop/hive
export PATH=$HIVE_HOME/bin:${HIVE_HOME}/conf:$PATH
export SCALA_HOME=/hadoop/scala
export PATH=$SCALA_HOME/bin:$PATH
export SPARK_HOME=/hadoop/spark
export PATH=$SPARK_HOME/bin:$PATH
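
A quick sanity check after sourcing the profile (a minimal sketch; the exact version strings depend on the installed packages):

source /etc/profile
scala -version          # should report Scala 2.12.2
spark-submit --version  # should report Spark 3.0.0-preview2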

 

 

 

Configure Spark

 

Go into Spark's conf directory and configure it as follows:

[root@m1 conf]#  cp spark-env.sh.template spark-env.sh

[root@m1 conf]# cp slaves.template slaves

[root@m1 conf]# cat spark-env.sh | grep -v ^# | grep -v ^$
export JAVA_HOME=/usr/local/java
export SCALA_HOME=/hadoop/scala
export SPARK_MASTER_IP=m1
export SPARK_WORKER_CORES=1   
export SPARK_WORKER_MEMORY=512M
export HADOOP_CONF_DIR=/hadoop/hadoop-3.2.0/etc/hadoop
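
Note that recent Spark releases document SPARK_MASTER_HOST rather than the older SPARK_MASTER_IP. If the master does not bind to m1 with the setting above, the following line in spark-env.sh is a reasonable substitute (an assumption, not part of the original setup):

export SPARK_MASTER_HOST=m1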

 


[root@m1 conf]# cat slaves | grep -v ^# | grep -v ^$      
m1
m2
m3

 

Then copy the spark directory to the other nodes, copy the updated /etc/profile as well, and source it on each node (a short sketch follows the scp commands).

[root@m1 conf]# scp -r /hadoop/spark root@m2:/hadoop/

[root@m1 conf]# scp -r /hadoop/spark root@m3:/hadoop/

[root@m1 conf]# scp -r /etc/profile root@m2:/etc/

[root@m1 conf]# scp -r /etc/profile root@m3:/etc/
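
The per-node follow-up is not shown above; a minimal sketch, assuming the same /hadoop layout on m2 and m3 (copying the scala directory is an assumption, since /etc/profile and spark-env.sh reference /hadoop/scala):

[root@m1 conf]# scp -r /hadoop/scala root@m2:/hadoop/
[root@m1 conf]# scp -r /hadoop/scala root@m3:/hadoop/

Then, on m2 and m3:

[root@m2 ~]# source /etc/profile
[root@m3 ~]# source /etc/profile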

 

3. Start Spark

[root@m1 sbin]# ./start-all.sh
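
This is Spark's own sbin directory ($SPARK_HOME/sbin), not Hadoop's. A quick way to confirm the cluster came up (a minimal sketch; hostnames and ports assume the defaults):

[root@m1 sbin]# jps    # m1 should show a Master plus a Worker (m1 is also listed in slaves)
[root@m2 ~]# jps       # m2 and m3 should each show a Worker

The master web UI should then be reachable at http://m1:8080, and a test job can be submitted to the standalone master with, for example:

[root@m1 sbin]# run-example --master spark://m1:7077 SparkPi 10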

 
