Quick notes on deploying Hive on Hadoop

The environment is the same one used for the earlier Hadoop deployment.

1. Download and extract Hive

Download link: https://mirror.bit.edu.cn/apache/hive/hive-3.1.2/apache-hive-3.1.2-bin.tar.gz

Extract: tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /hadoop/hive/ (this lands the files in /hadoop/hive/apache-hive-3.1.2-bin; move its contents up one level so /hadoop/hive is the Hive root, matching HIVE_HOME below)

Add the environment variables: append these two lines to /etc/profile, then source it.

export HIVE_HOME=/hadoop/hive
export PATH=$HIVE_HOME/bin:$HIVE_HOME/conf:$PATH
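Before touching /etc/profile, the two lines above can be exercised in a throwaway shell. A minimal sketch (it writes to a temp file instead of /etc/profile, so it is safe to run anywhere):

```shell
# Stage the two export lines in a temp file, source it, and verify.
# /etc/profile is deliberately not touched here.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export HIVE_HOME=/hadoop/hive
export PATH=$HIVE_HOME/bin:$HIVE_HOME/conf:$PATH
EOF
. "$profile"
echo "HIVE_HOME=$HIVE_HOME"
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH ok" ;;
  *)                    echo "PATH missing hive bin" ;;
esac
rm -f "$profile"
```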

 

2. Edit the Hive configuration

[root@m1 conf]# pwd
/hadoop/hive/conf

[root@m1 conf]# cp hive-env.sh.template hive-env.sh

[root@m1 conf]# cp hive-log4j2.properties.template hive-log4j2.properties

[root@m1 conf]# cp hive-default.xml.template hive-site.xml

After copying, edit these files:

[root@m1 conf]# cat hive-env.sh | grep -v ^#
HADOOP_HOME=/hadoop/hadoop-3.2.0

export HIVE_CONF_DIR=/hadoop/hive/conf

[root@m2 conf]# cat hive-site.xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?><!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>
        <!-- Scratch directory on HDFS for Hive's temporary data -->
        <property>
            <name>hive.exec.scratchdir</name>
            <value>/user/hive/tmp</value>
        </property>
        <!-- Hive warehouse location on HDFS -->
        <property>
            <name>hive.metastore.warehouse.dir</name>
            <value>/user/hive/warehouse</value>
        </property>
        <!-- Query log location -->
        <property>
            <name>hive.querylog.location</name>
            <value>/user/hive/log</value>
        </property>
        <property>
            <name>hive.metastore.uris</name>
            <value>thrift://192.168.156.81:9083</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://192.168.99.81:3306/hive?createDatabaseIfNotExist=true</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>database connection username</value>
        </property>
        <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>database connection password</value>
        </property>
</configuration>
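To sanity-check a finished hive-site.xml, a property value can be pulled out with a small awk helper. A rough sketch (the sample file and the `get_prop` helper name are made up for illustration; point it at the real hive-site.xml on a node):

```shell
# Build a tiny sample config so the sketch is self-contained, then extract
# one property's value the same way you would from the real hive-site.xml.
cat > sample-hive-site.xml <<'EOF'
<configuration>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive/warehouse</value>
    </property>
</configuration>
EOF

get_prop() {
    # Print the <value> line that follows the matching <name> line.
    awk -v n="$1" '
        $0 ~ "<name>" n "</name>" { found = 1; next }
        found && /<value>/ { gsub(/.*<value>|<\/value>.*/, ""); print; found = 0 }
    ' sample-hive-site.xml
}

get_prop hive.metastore.warehouse.dir   # prints /user/hive/warehouse
```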

 

hive-log4j2.properties was left unchanged this time; it is generally just logging-related configuration.

3. Create the matching directories in HDFS

Create the directories on HDFS that Hive will use, matching the hive-site.xml settings above. Note that hive.exec.scratchdir points at /user/hive/tmp, so that directory is needed as well:

[root@m1 bin]# hdfs dfs -mkdir -p /tmp
[root@m1 bin]# hdfs dfs -mkdir -p /user/hive/tmp
[root@m1 bin]# hdfs dfs -mkdir -p /user/hive/warehouse
[root@m1 bin]# hdfs dfs -chmod g+w /tmp
[root@m1 bin]# hdfs dfs -chmod g+w /user/hive/tmp
[root@m1 bin]# hdfs dfs -chmod g+w /user/hive/warehouse
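The directory setup can also be scripted idempotently. In this sketch `hdfs` is stubbed out so it can be read and run without a cluster; delete the stub line on a real node:

```shell
# Stub so the loop is runnable anywhere; delete this line on the cluster.
hdfs() { echo "hdfs $*"; }

for d in /tmp /user/hive/tmp /user/hive/warehouse; do
    hdfs dfs -mkdir -p "$d"     # -p: no error if the directory already exists
    hdfs dfs -chmod g+w "$d"
done
```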

 

 

4. Configure MariaDB

Install MariaDB to store Hive's metadata, i.e. the data that describes the data:

[root@m1 profile.d]# yum install mariadb-server -y

[root@m1 profile.d]# yum install -y mysql-connector-java

Start the service and run the initial secure setup:

[root@m1 profile.d]# service mariadb start

[root@m1 profile.d]# mysql_secure_installation

Set root password? [Y/n] y
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
 ... Success!

Set the username and password to match the configuration file above.
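Rather than connecting as root, a dedicated account can be created for the metastore. A hedged sketch of the SQL (the `hive` account name and the password are placeholders, not from the original setup; latin1 is the charset commonly recommended for the metastore database on MySQL/MariaDB):

```shell
# The statements are only assembled and printed here; on the DB host,
# pipe them into `mysql -uroot -p` instead.
sql=$(cat <<'SQL'
CREATE DATABASE IF NOT EXISTS hive DEFAULT CHARACTER SET latin1;
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'CHANGE_ME';
FLUSH PRIVILEGES;
SQL
)
printf '%s\n' "$sql"
# On the DB host: printf '%s\n' "$sql" | mysql -uroot -p
```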

Copy the MySQL JDBC driver into Hive's lib directory:

[root@m1 /]# cp /usr/share/java/mysql-connector-java.jar /hadoop/hive/lib/

 

 

5. Initialize Hive

[root@m1 conf]#  schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://192.168.99.81:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       root
Starting metastore schema initialization to 3.1.0
Initialization script hive-schema-3.1.0.mysql.sql

.....

Initialization script completed
schemaTool completed

At this point a hive database has been created automatically in the MySQL instance (the createDatabaseIfNotExist=true option in the JDBC URL takes care of this):

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| hive               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)

 

 

6. Start Hive. Since it was added to the PATH, typing hive is enough:

[root@m1 conf]# hive

Create a test table:

hive> create table t1 (id int,names string);

hive> show tables;
OK
t1
Time taken: 0.155 seconds, Fetched: 1 row(s)
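The same checks can be run non-interactively with hive -e, which is handy in scripts. In this sketch `hive` is stubbed out so it runs without a cluster; delete the stub line on a real node:

```shell
hive() { echo "[dry-run] hive $*"; }   # stub; delete on a real node

hive -e 'show tables'
hive -e 'select count(*) from t1'
```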

 

 

7. Deploy Hive on the slave nodes

Copy the directory straight to the slave node, then replicate the environment variables there and source them as well:

[root@m1 hadoop]# scp -r hive root@192.168.99.82:/hadoop/

Also make sure the slave's hive-site.xml contains this metastore section:

<property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.156.81:9083</value>
</property>

Start the metastore service on the Hive master node:

[root@m1 hadoop]# hive --service metastore &
[1] 76822
[root@m1 hadoop]# 2020-04-07 14:43:09: Starting Hive Metastore Server
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

[root@m1 conf]# jps
82035 Jps
21861 DataNode
102340 ResourceManager
76822 RunJar
21545 NameNode
103756 NodeManager
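Starting the metastore with a bare & (as above) makes it easy to lose track of; a pid-file wrapper is one common alternative. A sketch, with `hive` stubbed out so it runs anywhere (delete the stub and use a real log directory on the cluster):

```shell
hive() { sleep 2; }   # stub for `hive --service metastore`; delete on a real node

logdir=$(mktemp -d)   # placeholder; use e.g. /hadoop/hive/logs on the cluster
hive --service metastore > "$logdir/metastore.log" 2>&1 &
echo $! > "$logdir/metastore.pid"
echo "metastore started, pid $(cat "$logdir/metastore.pid")"
# Later: kill "$(cat "$logdir/metastore.pid")"
```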

 

 

8. Test access from the slave node

Start Hive on the slave node and run a query:

[root@m2 conf]# hive

Hive Session ID = 500d03cb-31e9-4f68-8737-90a3d2b4d6d2
hive> show tables;
OK
t1
Time taken: 0.704 seconds, Fetched: 1 row(s)

 
