
LAB 2

(Screenshots at the end)

Task 1.

Yes, the queries work as expected.

hive> select distinct(itemtype) from auction;

Query ID = user01_20200531122944_91607f44-7d6a-4520-b90f-d9e3fab57ad6

Total jobs = 1

Launching Job 1 out of 1

Number of reduce tasks not specified. Estimated from input data size: 1

In order to change the average load for a reducer (in bytes):

set hive.exec.reducers.bytes.per.reducer=<number>

In order to limit the maximum number of reducers:

set hive.exec.reducers.max=<number>

In order to set a constant number of reducers:

set mapreduce.job.reduces=<number>

Starting Job = job_1590951671240_0001, Tracking URL = http://maprdemo:8088/proxy/application_1590951671240_0001/

Kill Command = /opt/mapr/hadoop/hadoop-2.7.0/bin/hadoop job -kill job_1590951671240_0001

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1

2020-05-31 12:29:55,005 Stage-1 map = 0%, reduce = 0%

2020-05-31 12:30:01,428 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.57 sec

2020-05-31 12:30:08,912 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.7 sec

MapReduce Total cumulative CPU time: 2 seconds 700 msec

Ended Job = job_1590951671240_0001

MapReduce Jobs Launched:

Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 2.7 sec MAPRFS Read: 0 MAPRFS Write: 0 SUCCESS

Total MapReduce CPU Time Spent: 2 seconds 700 msec


OK

cartier

palm

xbox

Time taken: 25.051 seconds, Fetched: 3 row(s)

hive> select count(*) from auction;

Query ID = user01_20200531123018_35b13051-af90-4c66-8daa-9b3d858c81e8

Total jobs = 1

Launching Job 1 out of 1

Number of reduce tasks determined at compile time: 1

In order to change the average load for a reducer (in bytes):

set hive.exec.reducers.bytes.per.reducer=<number>

In order to limit the maximum number of reducers:

set hive.exec.reducers.max=<number>

In order to set a constant number of reducers:

set mapreduce.job.reduces=<number>

Starting Job = job_1590951671240_0002, Tracking URL = http://maprdemo:8088/proxy/application_1590951671240_0002/

Kill Command = /opt/mapr/hadoop/hadoop-2.7.0/bin/hadoop job -kill job_1590951671240_0002

Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1

2020-05-31 12:30:27,986 Stage-1 map = 0%, reduce = 0%

2020-05-31 12:30:33,277 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.4 sec

2020-05-31 12:30:40,774 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 2.53 sec

MapReduce Total cumulative CPU time: 2 seconds 530 msec

Ended Job = job_1590951671240_0002

MapReduce Jobs Launched:

Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 2.53 sec MAPRFS Read: 0 MAPRFS Write: 0 SUCCESS

Total MapReduce CPU Time Spent: 2 seconds 530 msec

OK

10654

Time taken: 23.493 seconds, Fetched: 1 row(s)

CREATE A DATABASE
hive> SHOW DATABASES;

OK

default

ebay

user01

Time taken: 0.012 seconds, Fetched: 3 row(s)

[user01@maprdemo user01]$ hadoop fs -ls /user/user01/hive

Found 1 items

drwx------ - user01 mapr 0 2020-05-31 12:32 /user/user01/hive/user01.db
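The `user01.db` directory above is how Hive stores a database under the warehouse path. The exact statement used in the lab is in the screenshots; a sketch of the likely DDL (the database name `user01` is taken from the listing):

```sql
-- Create a database; Hive materializes it as <name>.db under the warehouse directory
CREATE DATABASE IF NOT EXISTS user01;

-- Switch to it so subsequent tables land inside user01.db
USE user01;
```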


Create Partitioned and External Tables
Create an External Table
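The actual DDL for this step is in the screenshots at the end; a hedged sketch of the two table styles, assuming a comma-delimited weather dataset (the table names, column names, and path are illustrative assumptions, not the lab's exact schema):

```sql
-- Partitioned managed table: partition columns (yr, mo) become directory
-- levels, so queries filtered on them only scan the matching partitions
CREATE TABLE weather (
  station STRING,
  metric  STRING,
  value   INT
)
PARTITIONED BY (yr INT, mo INT)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

-- External table: Hive reads the files in place, and DROP TABLE removes
-- only the metadata, leaving the data files untouched (path is hypothetical)
CREATE EXTERNAL TABLE weather_ext (
  station STRING,
  metric  STRING,
  value   INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/user01/weather';
```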

Load Data into Tables
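The load commands used in the lab appear in the screenshots; they generally take this shape (file path, table name, and partition values below are hypothetical examples):

```sql
-- Copy a local file into a specific partition of a managed table
LOAD DATA LOCAL INPATH '/home/user01/data/weather_1970.csv'
INTO TABLE weather PARTITION (yr = 1970, mo = 1);
```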


Query Data with SELECT
1. Let’s look at all the temperatures from January 1970, the month when Unix time began:
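The query itself is in the screenshots; a sketch, assuming a table `weather` with year/month columns `yr`/`mo` and a temperature column `value` (all names are assumptions):

```sql
-- All temperatures recorded in January 1970
SELECT station, value
FROM weather
WHERE yr = 1970 AND mo = 1;
```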

2. Let’s try the same query, but for July, when it is winter in Antarctica:
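Only the month filter changes; assuming the same hypothetical `weather` schema as above (column names are assumptions):

```sql
-- Same query for July, mid-winter in the southern hemisphere
SELECT station, value
FROM weather
WHERE yr = 1970 AND mo = 7;
```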

3. The weather station at the South Pole is called Clean Air, because very little man-made pollution can be found there. Let’s find the July temperatures at the South Pole:
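This adds a station filter to the July query; the exact station literal and column names below are assumptions (the real ones are visible in the screenshots):

```sql
-- July temperatures at the South Pole station only
SELECT value
FROM weather
WHERE station = 'CLEAN AIR'
  AND yr = 1970 AND mo = 7;
```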

4. Find the average temperature in Antarctica in 1970:
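An aggregate over the whole year, again assuming the hypothetical `weather` schema (column names are assumptions). Like the `count(*)` query in Task 1, this launches a MapReduce job with one reducer:

```sql
-- Average of all temperature readings from 1970
SELECT AVG(value)
FROM weather
WHERE yr = 1970;
```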


5. Find the hottest and coldest temperatures recorded in Antarctica:
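Both extremes can come from one aggregate query (table and column names are assumptions, as above):

```sql
-- Hottest and coldest readings in the whole dataset
SELECT MAX(value) AS hottest,
       MIN(value) AS coldest
FROM weather;
```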
