
QUESTION 1

(1/1 point)
1. HDFS is designed for:

Large files, streaming data access, and commodity hardware – correct
Large files, low latency data access, and commodity hardware
Large files, streaming data access, and high-end hardware
Small files, streaming data access, and commodity hardware
None of the options is correct
You have used 2 of 2 submissions

QUESTION 2

(1 point possible)
2. The Hadoop distributed file system (HDFS) is the only distributed file
system supported by Hadoop. True or false?

True – incorrect
False
You have used 1 of 1 submissions

QUESTION 3

(1 point possible)
3. The input to a mapper takes the form < k1, v1 > . What form does the
mapper's output take?

< list(k2), v2 > – incorrect
list( < k2, v2 > )
< k2, list(v2) >
< k1, v1 >
None of the options is correct
You have used 2 of 2 submissions
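For context on Question 3: a mapper consumes one < k1, v1 > pair at a time and may emit zero or more intermediate pairs, i.e. a list of < k2, v2 >. A minimal word-count sketch in Python (hypothetical in-memory input, standing in for what the framework would feed the mapper):

```python
def mapper(key, value):
    """One <k1, v1> pair in (here: byte offset, line of text),
    a list of <k2, v2> pairs out (here: word, 1)."""
    return [(word, 1) for word in value.split()]

# The framework calls the mapper once per input record; the output
# is a list of intermediate pairs, not a single pair.
pairs = mapper(0, "the quick brown fox the")
# pairs now holds five (word, 1) tuples, one per word occurrence
```

Note the same word can appear several times in the mapper output; grouping values per key (< k2, list(v2) >) only happens later, at the reducer's input.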
QUESTION 4

(1 point possible)
4. What is Flume?

A service for moving large amounts of data around a cluster soon after the data is produced
A distributed file system
A programming language that translates high-level queries into map tasks and reduce tasks
A platform for executing MapReduce jobs – incorrect
None of the options is correct
You have used 2 of 2 submissions

QUESTION 5

(1/1 point)
5. What is the purpose of the shuffle operation in Hadoop MapReduce?

To pre-sort the data before it enters each mapper node
To distribute input splits among mapper nodes
To transfer each mapper's output to the appropriate reducer node based on a partitioning function – correct
To randomly distribute mapper output among reducer nodes
None of the options is correct
You have used 1 of 2 submissions
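The partitioning function in Question 5's correct answer decides which reducer receives each intermediate key: Hadoop's default HashPartitioner routes key k to reducer hash(k) mod numReducers. A rough Python analogue (using CRC32 as a stable stand-in hash, since Java's String.hashCode differs):

```python
import zlib

def partition(key, num_reducers):
    """Stand-in for Hadoop's default HashPartitioner: every
    occurrence of the same key maps to the same reducer."""
    return zlib.crc32(key.encode("utf-8")) % num_reducers

# During the shuffle, mapper output is bucketed by partition so each
# reducer can fetch exactly its slice from every mapper.
mapper_output = [("apple", 1), ("banana", 1), ("apple", 1)]
by_reducer = {}
for k2, v2 in mapper_output:
    by_reducer.setdefault(partition(k2, 4), []).append((k2, v2))
```

Because the routing is deterministic per key, both ("apple", 1) pairs land at the same reducer, which is what makes per-key aggregation possible.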

QUESTION 6

(1 point possible)
6. Which of the following is a duty of the DataNodes in HDFS?

Control the execution of an individual map task or a reduce task
Maintain the file system tree and metadata for all files and directories – incorrect
Manage the file system namespace
Store and retrieve blocks when told to by clients or the NameNode
None of the options is correct
You have used 2 of 2 submissions

QUESTION 7

(1 point possible)
7. Which of the following is a duty of the NameNode in HDFS?

Control the MapReduce job from end-to-end
Maintain the file system tree and metadata for all files and directories
Store the block data
Transfer block data from the data nodes to the clients – incorrect
None of the options is correct
You have used 2 of 2 submissions

QUESTION 8

(1/1 point)
8. Which component determines the specific nodes that a MapReduce task
will run on?

The NameNode
The JobTracker – correct
The TaskTrackers
The JobClient
None of the options is correct
You have used 1 of 2 submissions
QUESTION 9

(1/1 point)
9. Which of the following characteristics is common to Pig, Hive, and Jaql?

All translate high-level languages to MapReduce jobs – correct
All operate on JSON data structures
All are data flow languages
All support random reads/writes
None of the options is correct
You have used 1 of 2 submissions

QUESTION 10

(1/1 point)
10. Which of the following is NOT an open source project related to Hadoop?

Pig
UIMA
Jackal – correct
Avro
Lucene


You have used 1 of 2 submissions

QUESTION 11

(1/1 point)
11. During the replication process, a block of data is written to all specified
DataNodes in parallel. True or false?

True
False – correct


You have used 1 of 1 submissions
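The point behind Question 11 is that HDFS replicates a block through a pipeline, not in parallel: the client streams the block to the first DataNode, which forwards it to the second, and so on down the replica chain. A toy sketch of that chain (hypothetical node names and in-memory "stores", not the real HDFS wire protocol):

```python
def pipeline_write(block, datanodes, stores):
    """Toy pipelined replication: each node stores the block,
    then forwards it to the next node in the chain, so replicas
    are written one after another rather than all at once."""
    if not datanodes:
        return
    first, rest = datanodes[0], datanodes[1:]
    stores[first] = block                 # this node persists its replica
    pipeline_write(block, rest, stores)   # then forwards downstream

stores = {}
pipeline_write(b"block-0001", ["dn1", "dn2", "dn3"], stores)
# afterwards dn1, dn2 and dn3 each hold an identical copy of the block
```

The pipeline shape means the client only pushes the data once, and each DataNode bears the cost of forwarding to exactly one downstream peer.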

QUESTION 12

(1 point possible)
12. With IBM BigInsights, Hadoop components can be started and stopped
from a command line and from the Ambari Console. True or false?
True
False – incorrect
You have used 1 of 1 submissions

QUESTION 13

(1 point possible)
13. When loading data into HDFS, data is held at the NameNode until the
block is filled and then the data is sent to a DataNode. True or false?

True – incorrect
False


You have used 1 of 1 submissions

QUESTION 14

(1 point possible)
14. Which of the following is true about the Hadoop federation?

Uses JournalNodes to decide the active NameNode
Allows non-Hadoop programs to access data in HDFS – incorrect
Allows multiple NameNodes with their own namespaces to share a pool of DataNodes
Implements a resource manager external to all Hadoop frameworks
You have used 2 of 2 submissions

QUESTION 15

(1 point possible)
15. Which of the following is true about Hadoop high availability?

Uses JournalNodes to decide the active NameNode
Allows non-Hadoop programs to access data in HDFS – incorrect
Allows multiple NameNodes with their own namespaces to share a pool of DataNodes
Implements a resource manager external to all Hadoop frameworks
You have used 2 of 2 submissions

QUESTION 16
(1 point possible)
16. Which of the following is true about YARN?

Uses JournalNodes to decide the active NameNode
Allows non-Hadoop programs to access data in HDFS – incorrect
Allows multiple NameNodes with their own namespaces to share a pool of DataNodes
Implements a resource manager external to all Hadoop frameworks
You have used 2 of 2 submissions

QUESTION 17

(1 point possible)
17. Which of the following sentences is true?

Hadoop is good for OLTP, DSS, and big data
Hadoop includes open source components and closed source components
Hadoop is a new technology designed to replace relational databases – incorrect
All of the options are correct
None of the options is correct
You have used 2 of 2 submissions

QUESTION 18

(1/1 point)
18. In which of these scenarios should Hadoop be used?

Processing billions of email messages to perform text analytics – correct
Obtaining stock price trends on a per-minute basis
Processing weather sensor information to predict a hurricane path
Analyzing vital signs of a baby in real time
None of the options is correct
You have used 1 of 2 submissions
