How to Use Hive

This article explains how to use Hive. The steps involved are simple and practical, so feel free to follow along and try them out yourself.


    1. Run modes (cluster vs. local)

        1.1 Cluster mode: >SET mapred.job.tracker=<jobtracker_host:port> (point the property at the cluster's JobTracker; queries then run as MapReduce jobs on the cluster)

        1.2 Local mode: >SET mapred.job.tracker=local
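            A minimal sketch of switching modes inside a Hive session (t1 is a hypothetical small table, and the JobTracker address shown is only a placeholder for your cluster's actual host:port):

                SET mapred.job.tracker=local;              -- subsequent queries run as local MapReduce jobs
                select count(*) from t1;                   -- quick for small data sets
                SET mapred.job.tracker=master:9001;        -- placeholder JobTracker address; jobs now run on the cluster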

    2. Three ways to access Hive

        2.1 Terminal (CLI) access

            #hive   or   #hive --service cli

        2.2 Web interface (HWI), port 9999

            #hive --service hwi &

        2.3 Hive remote service (HiveServer), port 10000

            #hive --service hiveserver &

    3. Data types

       3.1 Primitive data types:

            Data type    Size / range
            tinyint      1 byte  (-128 ~ 127)
            smallint     2 bytes (-2^15 ~ 2^15-1)
            int          4 bytes (-2^31 ~ 2^31-1)
            bigint       8 bytes (-2^63 ~ 2^63-1)
            float        4 bytes, single precision
            double       8 bytes, double precision
            string       variable-length character string
            boolean      true / false

        3.2 Complex data types: ARRAY, MAP, STRUCT, UNION
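            A brief sketch of declaring and reading the complex types (the table name, column names and delimiters below are made up for illustration; the collection/map delimiters only matter for delimited text storage):

                create table employee_demo(
                    name    string,
                    skills  array<string>,
                    scores  map<string,int>,
                    address struct<city:string,street:string>
                )
                row format delimited
                fields terminated by '\t'
                collection items terminated by ','
                map keys terminated by ':';

                -- complex columns are accessed by index, key or field name
                select name, skills[0], scores['math'], address.city from employee_demo;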

    4. Data storage

        4.1 Stored on HDFS

        4.2 Storage structures: database, table, file, view

        4.3 Data is parsed simply by specifying the row and column delimiters
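            To check where a database or table actually lives on HDFS, the DESCRIBE commands can be run from the Hive CLI (db_name and table_name are placeholders; the warehouse root is controlled by hive.metastore.warehouse.dir):

                describe database db_name;          -- prints the database's HDFS location
                describe formatted table_name;      -- the "Location:" field is the table's directory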

    5. Basic operations

        5.1 Create a database: >create database db_name

        5.2 Switch to a database: >use db_name

        5.3 List tables: show tables;

        5.4 Create tables

                5.4.1 Managed (internal) table, the default: create table table_name(param_name type1,param_name2 type2,...) row format delimited fields terminated by 'delimiter';

                 Example: create table trade_detail(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t';

                A managed (internal) table is similar to an ordinary database table and is stored on HDFS (the location is given by the hive.metastore.warehouse.dir property; all tables except external tables are kept there). When the table is dropped, its data and metadata are deleted together.

                Load data: load data local inpath 'path' into table table_name;

                5.4.2 Partitioned table: create table table_name(param_name type1,param_name2 type2,...) partitioned by (param_name type) row format delimited fields terminated by 'delimiter';

                Example: create table td_part(id bigint, account string, income double, expenses double, time string) partitioned by (logdate string) row format delimited fields terminated by '\t';

                Difference from an ordinary table: the data is divided into separate partition files, and each partition of the table corresponds to a subdirectory under the table's directory.

                Load data: load data local inpath 'path' into table table_name partition (parti_param1='value',parti_param2='value',..);

                Add a partition: alter table partition_table add partition (daytime='2013-02-04',city='bj');

                Drop a partition: alter table partition_table drop partition (daytime='2013-02-04',city='bj'); the metadata and data files are deleted, but the directory still exists.
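                A quick way to check the result of the statements above is SHOW PARTITIONS (td_part and partition_table are the example tables from this subsection; the partition values in the comments are illustrative):

                    show partitions td_part;              -- e.g. logdate=2013-02-25
                    show partitions partition_table;      -- e.g. daytime=2013-02-04/city=bj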

                5.4.3 External table: create external table td_ext(id bigint, account string, income double, expenses double, time string) row format delimited fields terminated by '\t' location 'hdfs_path';

                Load data: load data inpath 'hdfs_path' into table table_name;

                5.4.4 Bucketed table: the data is hashed on a column and the rows are distributed into separate files for storage.
                Create the table: create table bucket_table(id string) clustered by(id) into 4 buckets;

                Load data:

                        set hive.enforce.bucketing = true;

                        The setting above must be executed before the data can be loaded.
                        insert into table bucket_table select name from stu;    
                        insert overwrite table bucket_table select name from stu;

                When data is loaded into a bucketed table, the bucketing column is hashed and the hash is taken modulo the number of buckets; each row is then written to the corresponding file.

                Sampling the data: select * from bucket_table tablesample(bucket 1 out of 4 on id);
        6. Create a view: CREATE VIEW v1 AS select * from t1;

        7. Alter a table: alter table tb_name add columns (col_name type);
        8. Drop a table: drop table tb_name;

        9. Data import

            9.1 Load data: LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]

                    When data is loaded this way Hive does not transform it; the LOAD operation simply copies the data into the location corresponding to the Hive table.
            9.2 Copying data between Hive tables: INSERT OVERWRITE TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement FROM from_statement
            9.3 CREATE TABLE ... AS SELECT (the new table's columns are derived from the query): CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name AS SELECT * FROM tb_name;
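            As a concrete sketch of 9.2 and 9.3, reusing trade_detail from 5.4.1 and td_part from 5.4.2 (the partition value and the trade_summary table name are made up for illustration):

                -- 9.2: copy rows from one table into a partition of another
                insert overwrite table td_part partition (logdate='2013-02-25')
                select id, account, income, expenses, time from trade_detail;

                -- 9.3: create a new table whose schema and data both come from a query
                create table trade_summary as
                select account, sum(income) as total_income, sum(expenses) as total_expenses
                from trade_detail
                group by account;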

        10. Queries

            10.1 Syntax

                        SELECT [ALL | DISTINCT] select_expr, select_expr, ...
                        FROM table_reference
                        [WHERE where_condition]
                        [GROUP BY col_list]
                        [ CLUSTER BY col_list | [DISTRIBUTE BY col_list] [SORT BY col_list] | [ORDER BY col_list] ]
                        [LIMIT number]

                        ALL and DISTINCT: ALL (the default) returns all rows; DISTINCT removes duplicate rows.

            10.2 Querying partitions

                        Partition pruning (input pruning) works like a "partition index": partitions are skipped only when the partition column appears in the WHERE clause.
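                        For example, with td_part from 5.4.2 (partitioned by logdate), the first query below reads only one partition directory while the second scans them all (the filter values are illustrative):

                            select * from td_part where logdate = '2013-02-25';    -- pruned to a single partition
                            select * from td_part where account = 'abc';           -- no partition filter: all partitions scanned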

            10.3 LIMIT clause

                        LIMIT restricts the number of records returned; the rows returned are chosen at random. Syntax: SELECT * FROM t1 LIMIT 5
            10.4 Top N
                        SET mapred.reduce.tasks = 1;
                        SELECT * FROM sales SORT BY amount DESC LIMIT 5;

        11. Joins

            11.1 Inner join: select b.name,a.* from dim_ac a join acinfo b on (a.ac=b.acip) limit 10;
            11.2 Left outer join: select b.name,a.* from dim_ac a left outer join acinfo b on a.ac=b.acip limit 10;

        12. Java client

            12.1 Start the remote service: #hive --service hiveserver

            12.2 Sample code

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Register the Hive JDBC driver (for the HiveServer service started in 12.1)
Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
// Connect to the remote Hive service on port 10000
Connection con = DriverManager.getConnection("jdbc:hive://192.168.1.102:10000/wlan_dw", "", "");
Statement stmt = con.createStatement();
String querySQL = "SELECT * FROM wlan_dw.dim_m order by flux desc limit 10";

ResultSet res = stmt.executeQuery(querySQL);

// Iterate over the result set and print each row
while (res.next()) {
    System.out.println(res.getString(1) + "\t" + res.getLong(2) + "\t" + res.getLong(3) + "\t" + res.getLong(4) + "\t" + res.getLong(5));
}

        13. User-defined functions (UDF)

            13.1 A UDF can be applied directly in a SELECT statement to format the query results before they are output.
            13.2 Note the following when writing a UDF:
                a) A custom UDF must extend org.apache.hadoop.hive.ql.exec.UDF.
                b) It must implement an evaluate method; evaluate supports overloading.

            13.3 Steps
                a) Package the program into a jar and copy it to the target machine;
                b) In the Hive client, add the jar: hive>add jar /run/jar/udf_test.jar;
                c) Create a temporary function: hive>CREATE TEMPORARY FUNCTION add_example AS 'hive.udf.Add';
                d) Use it in HQL queries:
                    SELECT add_example(8, 9) FROM scores;
                    SELECT add_example(scores.math, scores.art) FROM scores;
                    SELECT add_example(6, 7, 8, 6.8) FROM scores;
                e) Drop the temporary function: hive> DROP TEMPORARY FUNCTION add_example;
                Note: a UDF emits one output row per input row; if many rows need to be aggregated into a single output, implement a UDAF instead.

            13.4 Code

package cn.itheima.bigdata.hive;

import java.util.HashMap;

import org.apache.hadoop.hive.ql.exec.UDF;

public class AreaTranslationUDF extends UDF{
    
    private static HashMap<String, String> areaMap = new HashMap<String, String>();
    
    static{
        
        areaMap.put("138", "beijing");
        areaMap.put("139", "shanghai");
        areaMap.put("137", "guangzhou");
        areaMap.put("136", "niuyue");
        
    }

    // Translates a phone number prefix into its home area. The evaluate method must be public, otherwise Hive cannot call it.
    public String evaluate(String phonenbr) {

        String area = areaMap.get(phonenbr.substring(0,3));
        return area==null?"other":area;

    }
    
    // Returns the sum of two int fields (an overloaded evaluate method).
    public int evaluate(int x,int y){
        
        return x+y;
    }

}
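Tying 13.3 and 13.4 together, a sketch of registering and calling the AreaTranslationUDF above from the Hive CLI (the jar path, function name, and the flow_log table with its phonenbr/upflow/downflow columns are made up for illustration):

add jar /run/jar/area_udf.jar;
create temporary function getarea as 'cn.itheima.bigdata.hive.AreaTranslationUDF';

-- one-argument overload: map a phone number to its area
select getarea(phonenbr) from flow_log;
-- two-argument overload: add two int columns
select getarea(upflow, downflow) from flow_log;

drop temporary function getarea;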

By now you should have a better understanding of how to use Hive; the best next step is to try these commands out yourself.
