Big Data Hadoop Prep Questions



This post is dedicated to Big Data Hadoop interview questions. Please keep checking back for new questions.


What is HDFS?
HDFS stands for the Hadoop Distributed File System, a distributed file system that stores very large amounts of data (terabytes or more). It provides high-throughput access to this information. Files are stored redundantly across multiple machines, which ensures high availability and durability under failure for highly parallel applications.

The HDFS is a block-structured file system: individual files are broken into blocks of a fixed size. These blocks are stored across a cluster of one or more machines with data storage capacity. Individual machines in the cluster are referred to as DataNodes.
What are the two core components of the Hadoop framework?
Hadoop distributed file system (HDFS) and MapReduce

What are Hadoop clusters?
They are sets of machines running the core components: a) the Hadoop Distributed File System (HDFS) and b) MapReduce.

A Hadoop cluster is designed for distributed processing of large data sets across groups of commodity machines (low-cost servers). The data can be unstructured, semi-structured, or structured. It is designed to scale up to thousands of machines with a high degree of fault tolerance, and the software has the intelligence to detect and handle failures at the application layer.

What are nodes in Hadoop technology?
Individual machines running core components, Hadoop distributed file system (HDFS) and MapReduce.

What are types of machines in Hadoop cluster environment (with their specific roles)?

1. Client machines:

- Load data (input files) into a cluster
- MapReduce Job/s Submission
- Collect results and view analytics

2. Master nodes:

- The Name Node coordinates the data storage function (HDFS) and maintains the metadata.
- The ResourceManager negotiates the necessary resources for a container and launches an ApplicationMaster to represent the submitted application.

3. Slave nodes:

The major part of the cluster consists of slave nodes, which perform the computation.
The NodeManager manages each node within a YARN cluster. It provides per-node services, from managing a container over its life cycle to monitoring resources and tracking the health of its node.

A Container represents an allocated resource in the cluster. The ResourceManager is the sole authority that allocates containers to applications. An allocated container is always on a single node, has a unique container ID, and has a specific amount of resources allocated to it. Typically, an ApplicationMaster receives containers from the ResourceManager during resource negotiation and then talks to the NodeManager to start or stop them. Resource models a set of computer resources; currently it only models memory (other resources such as CPU may be added in the future).

What is a block size in HDFS?
The default block size is 64 MB, but 128 MB is typical.

Which interfaces need to be implemented to create a Mapper and a Reducer for Hadoop?
- org.apache.hadoop.mapreduce.Mapper
and
- org.apache.hadoop.mapreduce.Reducer
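
In the newer org.apache.hadoop.mapreduce API these are abstract classes that you extend rather than interfaces you implement. A minimal word-count style Mapper sketch (class and variable names are illustrative, not from any particular codebase):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits (word, 1) for every token in the input line
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            word.set(token);
            context.write(word, ONE);
        }
    }
}

A Reducer is written analogously by extending org.apache.hadoop.mapreduce.Reducer and overriding reduce() (a sketch appears in the Reducer section further down).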

Describe the InputFormat?
The InputFormat defines how to read data from a file into the Mapper instances. Hadoop comes with several implementations of InputFormat; some work with text files and describe different ways in which the text files can be interpreted. Others, like SequenceFileInputFormat, are purpose-built for reading particular binary file formats.

More powerfully, you can define your own InputFormat implementations to format the input to your programs however you want. For example, the default TextInputFormat reads lines of text files. The key it emits for each record is the byte offset of the line read (as a LongWritable), and the value is the contents of the line up to the terminating '\n' character (as a Text object). If you have multi-line records each separated by a $ character, you could write your own InputFormat that parses files into records split on this character instead.

What is the typical block size of an HDFS block?
The default block size is 64 MB, but 128 MB is typical.

Describe Name Node?
The Name Node holds all the file system metadata for the cluster, oversees the health of Data Nodes, and coordinates access to data.  The Name Node is the central controller of HDFS.  It does not hold any cluster data itself.  The Name Node only knows what blocks make up a file and where those blocks are located in the cluster.  The Name Node points Clients to the Data Nodes they need to talk to, keeps track of the cluster’s storage capacity and the health of each Data Node, and makes sure each block of data meets the minimum defined replica policy.
Data Nodes send heartbeats to the Name Node every 3 seconds via a TCP handshake, using the same port number defined for the Name Node daemon, usually TCP 9000.  Every tenth heartbeat is a Block Report, where the Data Node tells the Name Node about all the blocks it has.  The block reports allow the Name Node to build its metadata and ensure that three copies of each block exist on different nodes, in different racks.
The Name Node is a critical component of the Hadoop Distributed File System (HDFS).  Without it, Clients would not be able to write or read files from HDFS, and it would be impossible to schedule and execute Map Reduce jobs.  Because of this, it’s a good idea to equip the Name Node with a highly redundant enterprise class server configuration; dual power supplies, hot swappable fans, redundant NIC connections, etc

Secondary Name Node:
Hadoop has server role called the Secondary Name Node.  A common misconception is that this role provides a high availability backup for the Name Node.  This is not the case.
The Secondary Name Node occasionally connects to the Name Node (by default, every hour) and grabs a copy of the Name Node’s in-memory metadata and the files used to store metadata (both of which may be out of sync).  The Secondary Name Node combines this information in a fresh set of files and delivers them back to the Name Node, while keeping a copy for itself.
Should the Name Node die, the files retained by the Secondary Name Node can be used to recover the Name Node.  In a busy cluster, the administrator may configure the Secondary Name Node to provide this housekeeping service much more frequently than the default setting of one hour.  Maybe every minute.

What is a Data Node?
A Data Node is where the actual data resides in HDFS. The corresponding metadata, recording which block is on which node, is maintained at the Name Node.
There are some cases in which a Data Node daemon itself will need to read a block of data from HDFS.  One such case is where the Data Node has been asked to process data that it does not have locally, and therefore it must retrieve the data from another Data Node over the network before it can begin processing.

This is another key example of the Name Node’s Rack Awareness knowledge providing optimal network behaviour.  When the Data Node asks the Name Node for the location of block data, the Name Node will check if another Data Node in the same rack has the data.  If so, the Name Node provides the in-rack location from which to retrieve the data.  The flow does not need to traverse two more switches and congested links to find the data in another rack.  With the data retrieved quicker in-rack, the data processing can begin sooner, and the job completes that much faster.


Q12. What are the Hadoop Server Roles?


The three major categories of machine roles in a Hadoop deployment are Client machines, Master nodes, and Slave nodes.  The Master nodes oversee the two key functional pieces that make up Hadoop: storing lots of data (HDFS), and running parallel computations on all that data (Map Reduce).  The Name Node oversees and coordinates the data storage function (HDFS), while the Job Tracker oversees and coordinates the parallel processing of data using Map Reduce.  Slave Nodes make up the vast majority of machines and do all the dirty work of storing the data and running the computations.  Each slave runs both a Data Node and a Task Tracker daemon that communicate with and receive instructions from their master nodes.  The Task Tracker daemon is a slave to the Job Tracker, and the Data Node daemon is a slave to the Name Node.

Client machines have Hadoop installed with all the cluster settings, but are neither a Master nor a Slave.  Instead, the role of the Client machine is to load data into the cluster, submit Map Reduce jobs describing how that data should be processed, and then retrieve or view the results of the job when it is finished.  In smaller clusters (~40 nodes) you may have a single physical server playing multiple roles, such as both Job Tracker and Name Node.  With medium to large clusters you will often have each role operating on a single server machine.
In real production clusters there is no server virtualization, no hypervisor layer.  That would only amount to unnecessary overhead impeding performance.  Hadoop runs best on Linux machines, working directly with the underlying hardware.  That said, Hadoop does work in a virtual machine.  That’s a great way to learn and get Hadoop up and running fast and cheap.  I have a 6-node cluster up and running in VMware Workstation on my Windows 7 laptop.

How will you write a custom partitioner for a Hadoop job?
To have Hadoop use a custom partitioner you will have to do, at minimum, the following three things (see the sketch below):
- Create a new class that extends the Partitioner class
- Override the method getPartition
- In the wrapper that runs the MapReduce job, either
  - add the custom partitioner to the job programmatically using the method setPartitionerClass, or
  - add the custom partitioner to the job via a config file (if your wrapper reads from a config file or Oozie)
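
A minimal sketch of such a partitioner, assuming the newer mapreduce API and hypothetical Text/IntWritable key-value types:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Routes records to reducers based on the first character of the key
public class FirstCharPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;
        }
        // mask off the sign bit so the result is always non-negative
        return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
    }
}

// In the driver:
// job.setPartitionerClass(FirstCharPartitioner.class);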

How did you debug your Hadoop code
There can be several ways of doing this but most common ways are
- By using counters
- The web interface provided by Hadoop framework

Did you ever build a production process in Hadoop? If yes, what happens when your Hadoop job fails for any reason?

It's an open-ended question, but most candidates who have written a production job should talk about some type of alert mechanism, such as an email being sent or their monitoring system raising an alert. Since Hadoop works on unstructured data, it is very important to have a good alerting system for errors, because unexpected data can very easily break the job.

Did you ever run into a lopsided job that resulted in an out-of-memory error? If yes, how did you handle it?
This is an open-ended question, but a candidate who claims to be an intermediate developer and has worked on a large data set (10-20 GB minimum) should have run into this problem. There can be many ways to handle this problem, but the most common way is to alter your algorithm and break the job down into more MapReduce phases, or to use a combiner if possible.

Distributed Cache in Hadoop:
Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job. The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.
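
A sketch of how a file is typically registered and read, assuming the Hadoop 2.x Job API (older releases use the DistributedCache class directly); the HDFS path and class name are made-up examples:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CacheExampleDriver {                                // hypothetical driver class
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "cache example");
        // Register a lookup file; it is copied to every slave node before tasks run
        job.addCacheFile(new URI("/user/data/lookup.txt#lookup"));   // hypothetical HDFS path
        // ... set mapper/reducer/input/output as usual, then submit ...
    }
}

// Task side (e.g. inside Mapper.setup()):
// URI[] cachedFiles = context.getCacheFiles();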

What is the benefit of the Distributed Cache? Why can't we just have the file in HDFS and have the application read it?

The Distributed Cache is much faster. The file is copied to every task tracker once, at the start of the job, so if a task tracker runs 10 or 100 mappers or reducers, they all use the same local copy. On the other hand, if you write code in the MR job to read the file from HDFS, every mapper will access it from HDFS, so a task tracker running 100 map tasks will read the file 100 times from HDFS. HDFS is also not very efficient when used like this.

What mechanism does the Hadoop framework provide to synchronize changes made in the Distributed Cache during runtime of the application?
This is a trick question. There is no such mechanism; the Distributed Cache is by design read-only during job execution.

Counters in Hadoop. Give us an example scenario
Anybody who claims to have worked on a Hadoop project is expected to use counters.

Counters are used to count occurrences of particular kinds of records (for example dummy or malformed values) in a data set. Take the example of signalling data from a telecom provider, which could be of the form +135+22+45... This data can also contain some dummy or, more precisely, bad values such as +12a@3$+45+@...

Now this data is not good for processing. We cannot avoid or bypass this data but in a given file we can find out the number of such bad records by implementing counters.

Every time you encounter such data, the counter is incremented. This brings several advantages: suppose the data is coming from different signalling instruments installed, say, in Noida; you would then know which instrument is generating bad records more often, and could fix it.

Counters are important; in the MapReduce framework they can be defined using a Java enum. Hadoop itself also provides several built-in counters, which you can see after your job is completed.
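
A hedged sketch of counting bad records with an enum-based counter (the class name and the record validity check are made-up placeholders):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SignalMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    // User-defined counter group; totals appear in the job's counter report
    enum RecordQuality { GOOD, BAD }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String record = value.toString();
        if (record.matches("[+0-9]+")) {                         // placeholder validity check
            context.getCounter(RecordQuality.GOOD).increment(1);
            context.write(new Text(record), new IntWritable(1));
        } else {
            context.getCounter(RecordQuality.BAD).increment(1);  // count but skip bad records
        }
    }
}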

Is it possible to provide multiple inputs to Hadoop? If yes, how can you give multiple directories as input to a Hadoop job?
Yes. The input format class provides methods to add multiple directories as input to a Hadoop job.
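
For example, with the classes in org.apache.hadoop.mapreduce.lib.input (the paths and mapper name are hypothetical):

// Several directories feeding the same mapper
FileInputFormat.addInputPath(job, new Path("/data/2013/jan"));
FileInputFormat.addInputPath(job, new Path("/data/2013/feb"));

// If each input needs its own InputFormat or Mapper, MultipleInputs can be used instead:
// MultipleInputs.addInputPath(job, new Path("/data/logs"), TextInputFormat.class, LogMapper.class);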

Is it possible to have Hadoop job output in multiple directories? If yes, how?
Yes, by using the MultipleOutputs class.
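
A rough sketch using MultipleOutputs from org.apache.hadoop.mapreduce.lib.output (the named output and types are made up):

// Driver: declare a named output
MultipleOutputs.addNamedOutput(job, "errors", TextOutputFormat.class, Text.class, IntWritable.class);

// Reducer:
// private MultipleOutputs<Text, IntWritable> mos;
// mos = new MultipleOutputs<>(context);              // in setup()
// mos.write("errors", key, value, "errors/part");    // in reduce(), under a sub-path of the job output
// mos.close();                                       // in cleanup()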

What will a hadoop job do if you try to run it with an output directory that is already present? Will it
- overwrite it
- warn you and continue
- throw an exception and exit
The hadoop job will throw an exception and exit.

Q12. How can you set an arbitrary number of mappers to be created for a job in Hadoop?
This is a trick question. You cannot set it directly; the number of mappers is determined by the number of input splits.

Q13. How can you set an arbitrary number of reducers to be created for a job in Hadoop?
You can either do it programmatically by using the method setNumReduceTasks in the JobConf class, or set it up as a configuration setting.
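
For example (old-API JobConf shown; the newer Job class has the same method, and the driver class name is a placeholder):

JobConf conf = new JobConf(MyDriver.class);
conf.setNumReduceTasks(10);

// or, with the newer API:
// job.setNumReduceTasks(10);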


Q22. What is the difference between a Hadoop database and a Relational Database?
Hadoop is not a database; rather, it is a framework that consists of HDFS (the Hadoop Distributed File System) for storage and MapReduce for processing.

Structured data is data that is organized into entities that have a defined format, such as XML documents or database tables that conform to a particular predefined schema. This is the realm of the RDBMS. MapReduce, on the other hand, also works with semi-structured data (e.g., spreadsheets) and unstructured data (e.g., image files, plain text), because the input keys and values for MapReduce are not an intrinsic property of the data; they are chosen by the person analyzing the data.

HADOOP ADMINISTRATOR INTERVIEW QUESTIONS
Following are some questions and answers to ask a Hadoop Administrator Interviewee

Q1. What are the default configuration files that are used in Hadoop?
As of the 0.20 release, Hadoop supports the following read-only default configuration files:
- src/core/core-default.xml
- src/hdfs/hdfs-default.xml
- src/mapred/mapred-default.xml

Q2. How will you make changes to the default configuration files?
Hadoop does not recommend changing the default configuration files; instead it recommends making all site-specific changes in the following files:
- conf/core-site.xml
- conf/hdfs-site.xml
- conf/mapred-site.xml

Unless explicitly turned off, Hadoop by default specifies two resources, loaded in-order from the classpath:
- core-default.xml : Read-only defaults for hadoop.
- core-site.xml: Site-specific configuration for a given hadoop installation.

Hence, if the same configuration property is defined in both core-default.xml and core-site.xml, the value in core-site.xml is used (the same is true for the other two file pairs).

Q3. Consider a case scenario where you have set the property mapred.output.compress to true to ensure that all output files are compressed for efficient space usage on the cluster. If a cluster user does not want to compress data for a specific job, what will you recommend he do?
Ask him to create his own configuration file, specify the configuration mapred.output.compress as false, and load this file as a resource in his job.

Q4. In the above case scenario, how can you ensure that the user cannot override the configuration mapred.output.compress to false in any of his jobs?
This can be done by marking the property as final in the site configuration file (mapred-site.xml for this property).
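
A sketch of what that site file entry might look like:

<property>
  <name>mapred.output.compress</name>
  <value>true</value>
  <final>true</final>   <!-- user jobs can no longer override this value -->
</property>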

Q5. Which of the following is the only required variable that needs to be set in the file conf/hadoop-env.sh for Hadoop to work?

- HADOOP_LOG_DIR

- JAVA_HOME
- HADOOP_CLASSPATH
The only required variable to set is JAVA_HOME, which needs to point to the Java installation directory.


Q6. List all the daemons required to run the Hadoop cluster.
- NameNode
- SecondaryNameNode
- DataNode
- JobTracker
- TaskTracker


Q7. What's the default port that the JobTracker web UI listens on?
50030

Q8. What's the default port that the DFS NameNode web UI listens on?
50070

JAVA INTERVIEW QUESTIONS FOR HADOOP DEVELOPER
Q1. Explain the difference between a Class Variable and an Instance Variable, and how they are declared in Java.
A class variable is a variable declared with the static modifier.
An instance variable is a variable in a class without the static modifier.
The main difference is that memory for all class variables is allocated only once, when the class is first loaded into memory. That means class variables do not depend on the objects of that class: whatever the number of objects, only one copy is created at class-loading time.

Q2. Since an Abstract class in Java cannot be instantiated then how can you use its non static methods
By extending it

Q3. How would you make a copy of an entire Java object with its state?
Have this class implement Cloneable interface and call its method clone().

Q4. Explain Encapsulation,Inheritance and Polymorphism
Encapsulation is the process of binding or wrapping the data and the code that operates on the data into a single entity. This keeps the data safe from outside interference and misuse. One way to think about encapsulation is as a protective wrapper that prevents code and data from being arbitrarily accessed by other code defined outside the wrapper.
Inheritance is the process by which one object acquires the properties of another object.
The meaning of Polymorphism is something like one name, many forms. Polymorphism enables one entity to be used as a general category for different types of actions. The specific action is determined by the exact nature of the situation. The concept of polymorphism can be summed up as "one interface, multiple methods".

Q5. Explain garbage collection?
Garbage collection is one of the most important features of Java.
Garbage collection is also called automatic memory management, as the JVM automatically removes unused objects from memory. A user program cannot directly free an object from memory; instead, it is the job of the garbage collector to automatically free objects that are no longer referenced by the program. Every class inherits the finalize() method from java.lang.Object; the finalize() method is called by the garbage collector when it determines that no more references to the object exist. In Java, it is a good idea to explicitly assign null to a variable when it is no longer in use.

Q6. What are the similarities/differences between an Abstract class and an Interface?
Differences:
- Interfaces provide a form of multiple inheritance; a class can extend only one other class.
- Interfaces are limited to public methods and constants with no implementation. Abstract classes can have a partial implementation, protected parts, static methods, etc.
- A class may implement several interfaces, but it may extend only one abstract class.
- Interfaces are slower, as they require extra indirection to find the corresponding method in the actual class. Abstract classes are faster.
Similarities:
- Neither abstract classes nor interfaces can be instantiated.

Q7. What are the different ways to make your class multithreaded in Java?
There are two ways to create new kinds of threads:
- Define a new class that extends the Thread class
- Define a new class that implements the Runnable interface, and pass an object of that class to a Thread's constructor.
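
A minimal sketch of both approaches:

// Option 1: subclass Thread
class Worker extends Thread {
    @Override
    public void run() {
        System.out.println("running in a Thread subclass");
    }
}

// Option 2: implement Runnable and hand it to a Thread
class Task implements Runnable {
    @Override
    public void run() {
        System.out.println("running via Runnable");
    }
}

// Usage:
// new Worker().start();
// new Thread(new Task()).start();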

Q8. What do you understand by synchronization? How do you synchronize a method in Java? How do you synchronize a block of code in Java?
Synchronization is the process of controlling access to shared resources by multiple threads in such a manner that only one thread can access a resource at a time. In a non-synchronized multithreaded application, it is possible for one thread to modify a shared object while another thread is in the process of using or updating the object's value. Synchronization prevents this type of data corruption.
- Synchronizing a method: Put keyword synchronized as part of the method declaration
- Synchronizing a block of code inside a method: Put block of code in synchronized (this) { Some Code }
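
For example:

class Counter {
    private int count = 0;

    // Synchronized method: locks on "this" for the whole method
    public synchronized void increment() {
        count++;
    }

    // Synchronized block: locks only the critical section
    public void add(int n) {
        synchronized (this) {
            count += n;
        }
    }
}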

Q9. What is a transient variable?
A transient variable is not serialized. For example, if a variable is declared as transient in a Serializable class and the class is written to an ObjectStream, the value of the variable is not written to the stream; instead, when the object is read back from the ObjectStream, the variable is reset to its default value (null for object references).

Q10. What is the Properties class in Java? Which class does it extend?
The Properties class represents a persistent set of properties. The properties can be saved to a stream or loaded from a stream. Each key and its corresponding value in the property list is a string. It extends java.util.Hashtable.

Q11. Explain the concept of shallow copy vs deep copy in Java
In the case of a shallow copy, the cloned object refers to the same objects as the original, since only the object references are copied and not the referred objects themselves.
In the case of a deep copy, a clone of the object and of all objects referred to by that object is made.

Q12. How can you make a shallow copy of an object in Java
Use clone() method inherited by Object class

Q13. How would you make a copy of an entire Java object (deep copy) with its state?
Have the class implement the Cloneable interface and override clone() so that it also clones the objects it refers to (the default clone() produces only a shallow copy).

Q14. Which of the following object-oriented principles is met with method overloading in Java?
- Inheritance
- Polymorphism
- Encapsulation
Polymorphism

Q15. Which of the following object-oriented principles is met with method overriding in Java?
- Inheritance
- Polymorphism
- Encapsulation
Polymorphism

Q16. What is the name of the collection interface used to maintain unique elements?
Set

Q17. What access level do you need to specify in the class declaration to ensure that only classes from the same package can access it? What keyword is used to define this specifier?
You do not need to specify any access level; Java will then use the default (package-private) access level, for which there is no keyword.

Q18. What's the difference between a queue and a stack?
Stacks work by the last-in-first-out (LIFO) rule, while queues use the first-in-first-out (FIFO) rule.

Q19. How can you write user defined exceptions in Java
Make your class extend Exception Class

Q20. What is the difference between checked and unchecked exceptions in Java? Give an example of each type.
All predefined exceptions in Java are either checked or unchecked. Checked exceptions must be caught using a try..catch block, or declared using a throws clause; if you don't, compilation of the program will fail.
- Example checked exception: ParseException
- Example unchecked exception: ArrayIndexOutOfBoundsException

Q21. We know that FileNotFoundException is inherited from IOException; does it matter in what order the catch statements for FileNotFoundException and IOException are written?
Yes, it does. FileNotFoundException is inherited from IOException, and an exception's subclasses have to be caught first.

Q22. How do we find out whether two strings are the same in Java? If the answer is equals(), why do we have to use equals? Why can't we compare strings like integers?
We use method equals() to compare the values of the Strings. We can't use == like we do for primitive types like int because == checks if two variables point at the same instance of a String object.

Q23. What is the "package" keyword?
It is a way to organize files when a project consists of multiple modules. It also helps resolve naming conflicts when different packages have classes with the same names. The package access level also allows you to protect data from being used by non-authorized classes.

Q24. What is a mutable object and an immutable object?
If an object's value is changeable, we call it a mutable object (e.g., StringBuffer). If you are not allowed to change the value of an object, it is an immutable object (e.g., String, Integer, Float).

Q25. What are wrapper classes in Java? Why do they exist? Give examples.
Wrapper classes are classes that allow primitive types to be treated as objects, e.g. Integer, Float, etc.

Q26. Even though garbage collection cleans up memory, why can't it guarantee that a program will not run out of memory? Give an example of a case where a program runs out of memory despite garbage collection.
Because it is possible for programs to use up memory resources faster than they are garbage collected. It is also possible for programs to create objects that are not subject to garbage collection. One example is trying to load a very big file into an array.

Q27. What is the difference between Process and Thread?
A process can contain multiple threads. In most multithreading operating systems, a process gets its own memory address space; a thread doesn't. Threads typically share the heap belonging to their parent process. For instance, a JVM runs in a single process in the host O/S. Threads in the JVM share the heap belonging to that process; that's why several threads may access the same object. Typically, even though they share a common heap, threads have their own stack space. This is how one thread's invocation of a method is kept separate from another's

Q28. How can you write an indefinite loop in Java?
while(true) {
}
OR
for ( ; ; ){
}

Q29. How can you create a singleton class in Java?
Make the constructor of the class private and provide a static method to get the instance of the class.
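
A minimal sketch (the class name is just an example):

public class AppConfig {
    private static final AppConfig INSTANCE = new AppConfig();

    private AppConfig() { }              // private constructor blocks direct "new"

    public static AppConfig getInstance() {
        return INSTANCE;
    }
}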

Q30. What do keywords "this" and "super" do in Java
"this" is used to refer to current object. "super" is used to refer to the class extended by the current class

Q31. What are access specifiers in Java? List all of them. Access specifiers are used to define the scope of variables and members in Java. There are four levels of access specifiers in Java:
- public
- private
- protected
- default

Q32. Which of the following three object-oriented principles do access specifiers implement in Java?
- Encapsulation
- Polymorphism
- Inheritance
Encapsulation

Q33. What is method overriding and method overloading?
With overriding, you change the method's behavior for a subclass. Overloading involves having methods with the same name within a class but with different signatures.




24 Interview Questions & Answers for Hadoop

WHAT IS A JOBTRACKER IN HADOOP? HOW MANY INSTANCES OF JOBTRACKER RUN ON A HADOOP CLUSTER?

JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. There is only one JobTracker process running on any Hadoop cluster. The JobTracker runs in its own JVM process; in a typical production cluster it runs on a separate machine. Each slave node is configured with the job tracker node location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted. The JobTracker in Hadoop performs the following actions (from the Hadoop Wiki):
Client applications submit jobs to the Job tracker.
The JobTracker talks to the NameNode to determine the location of the data
The JobTracker locates TaskTracker nodes with available slots at or near the data
The JobTracker submits the work to the chosen TaskTracker nodes.
The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
When the work is completed, the JobTracker updates its status.

Client applications can poll the JobTracker for information.
HOW DOES THE JOBTRACKER SCHEDULE A TASK?

The TaskTrackers send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and if not, it looks for an empty slot on a machine in the same rack.
WHAT IS A TASK TRACKER IN HADOOP? HOW MANY INSTANCES OF TASKTRACKER RUN ON A HADOOP CLUSTER

A TaskTracker is a slave node daemon in the cluster that accepts tasks (Map, Reduce and Shuffle operations) from a JobTracker. There is only one TaskTracker process running on any Hadoop slave node. The TaskTracker runs in its own JVM process. Every TaskTracker is configured with a set of slots; these indicate the number of tasks that it can accept. The TaskTracker starts separate JVM processes to do the actual work (called Task Instances); this is to ensure that a process failure does not take down the task tracker. The TaskTracker monitors these task instances, capturing the output and exit codes. When the task instances finish, successfully or not, the task tracker notifies the JobTracker. The TaskTrackers also send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that they are still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated.
WHAT IS A TASK INSTANCE IN HADOOP? WHERE DOES IT RUN?

Task Instances are the actual MapReduce tasks that run on each slave node. The TaskTracker starts separate JVM processes to do the actual work (called Task Instances); this is to ensure that a process failure does not take down the task tracker. Each Task Instance runs in its own JVM process. There can be multiple task instance processes running on a slave node, based on the number of slots configured on the task tracker. By default a new task instance JVM process is spawned for each task.
HOW MANY DAEMON PROCESSES RUN ON A HADOOP SYSTEM?

Hadoop is comprised of five separate daemons, each of which runs in its own JVM. The following 3 daemons run on Master nodes: NameNode - stores and maintains the metadata for HDFS; Secondary NameNode - performs housekeeping functions for the NameNode; JobTracker - manages MapReduce jobs and distributes individual tasks to machines running the TaskTracker. The following 2 daemons run on each Slave node: DataNode - stores actual HDFS data blocks; TaskTracker - responsible for instantiating and monitoring individual Map and Reduce tasks.
WHAT IS CONFIGURATION OF A TYPICAL SLAVE NODE ON HADOOP CLUSTER? HOW MANY JVMS RUN ON A SLAVE NODE?

Single instance of a Task Tracker is run on each Slave node. Task tracker is run as a separate JVM process.
Single instance of a DataNode daemon is run on each Slave node. DataNode daemon is run as a separate JVM process.
One or Multiple instances of Task Instance is run on each slave node. Each task instance is run as a separate JVM process. The number of Task instances can be controlled by configuration. Typically a high end machine is configured to run more task instances.
WHAT IS THE DIFFERENCE BETWEEN HDFS AND NAS ?

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. Following are differences between HDFS and NAS
In HDFS Data Blocks are distributed across local drives of all machines in a cluster. Whereas in NAS data is stored on dedicated hardware.
HDFS is designed to work with the MapReduce system, since computation is moved to the data. NAS is not suitable for MapReduce since data is stored separately from the computations.
HDFS runs on a cluster of machines and provides redundancy using a replication protocol, whereas NAS is provided by a single machine and therefore does not provide data redundancy.
HOW DOES THE NAMENODE HANDLE DATANODE FAILURES?

The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode. When the NameNode notices that it has not received a heartbeat message from a data node after a certain amount of time, the data node is marked as dead. Since blocks will be under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates the replication of data blocks from one DataNode to another. The replication data transfer happens directly between DataNodes, and the data never passes through the NameNode.
DOES MAPREDUCE PROGRAMMING MODEL PROVIDE A WAY FOR REDUCERS TO COMMUNICATE WITH EACH OTHER? IN A MAPREDUCE JOB CAN A REDUCER COMMUNICATE WITH ANOTHER REDUCER?

No, the MapReduce programming model does not allow reducers to communicate with each other. Reducers run in isolation.
CAN I SET THE NUMBER OF REDUCERS TO ZERO?

Yes, setting the number of reducers to zero is a valid configuration in Hadoop. When you set the reducers to zero, no reducers will be executed, and the output of each mapper will be stored in a separate file on HDFS. (This is different from the case when reducers are set to a number greater than zero, where the mappers' output, the intermediate data, is written to the local file system, NOT HDFS, of each mapper slave node.)
WHERE IS THE MAPPER OUTPUT (INTERMEDIATE KEY-VALUE DATA) STORED?

The mapper output (intermediate data) is stored on the Local file system (NOT HDFS) of each individual mapper nodes. This is typically a temporary directory location which can be setup in config by the hadoop administrator. The intermediate data is cleaned up after the Hadoop Job completes.
WHAT ARE COMBINERS? WHEN SHOULD I USE A COMBINER IN MY MAPREDUCE JOB?

Combiners are used to increase the efficiency of a MapReduce program. They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers. You can use your reducer code as a combiner if the operation performed is commutative and associative. The execution of the combiner is not guaranteed; Hadoop may or may not execute a combiner. Also, if required, it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution.
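
In the driver this is just one extra call (class names are placeholders):

job.setMapperClass(WordCountMapper.class);
job.setCombinerClass(WordCountReducer.class);   // runs on map-side partial output
job.setReducerClass(WordCountReducer.class);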
WHAT IS WRITABLE & WRITABLECOMPARABLE INTERFACE?

org.apache.hadoop.io.Writable is a Java interface. Any key or value type in the Hadoop Map-Reduce framework implements this interface. Implementations typically implement a static read(DataInput) method which constructs a new instance, calls readFields(DataInput) and returns the instance.
org.apache.hadoop.io.WritableComparable is a Java interface. Any type which is to be used as a key in the Hadoop Map-Reduce framework should implement this interface. WritableComparable objects can be compared to each other using Comparators.
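A hedged sketch of a custom key type implementing WritableComparable (the class and field names are illustrative):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;

public class EventKey implements WritableComparable<EventKey> {
    private long userId;
    private long timestamp;

    @Override
    public void write(DataOutput out) throws IOException {     // serialization
        out.writeLong(userId);
        out.writeLong(timestamp);
    }

    @Override
    public void readFields(DataInput in) throws IOException {  // deserialization
        userId = in.readLong();
        timestamp = in.readLong();
    }

    @Override
    public int compareTo(EventKey other) {                     // ordering used when keys are sorted
        int cmp = Long.compare(userId, other.userId);
        return cmp != 0 ? cmp : Long.compare(timestamp, other.timestamp);
    }
}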
WHAT IS THE HADOOP MAPREDUCE API CONTRACT FOR A KEY AND VALUE CLASS?

The Key must implement the org.apache.hadoop.io.WritableComparable interface.
The value must implement the org.apache.hadoop.io.Writable interface.
WHAT ARE IDENTITYMAPPER AND IDENTITYREDUCER IN MAPREDUCE?

org.apache.hadoop.mapred.lib.IdentityMapper implements the identity function, mapping inputs directly to outputs. If the MapReduce programmer does not set the Mapper class using JobConf.setMapperClass, then IdentityMapper.class is used as the default.
org.apache.hadoop.mapred.lib.IdentityReducer performs no reduction, writing all input values directly to the output. If the MapReduce programmer does not set the Reducer class using JobConf.setReducerClass, then IdentityReducer.class is used as the default.
WHAT IS THE MEANING OF SPECULATIVE EXECUTION IN HADOOP? WHY IS IT IMPORTANT?

Speculative execution is a way of coping with slow individual machines. In large clusters where hundreds or thousands of machines are involved, there may be machines that are not performing as fast as others. This may delay a whole job because of only one machine not performing well. To avoid this, speculative execution in Hadoop can run multiple copies of the same map or reduce task on different slave nodes. The results from the first node to finish are used.
WHEN ARE THE REDUCERS STARTED IN A MAPREDUCE JOB?

In a MapReduce job, reducers do not start executing the reduce method until all map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The programmer-defined reduce method is called only after all the mappers have finished.
IF REDUCERS DO NOT START BEFORE ALL MAPPERS FINISH, WHY DOES THE PROGRESS ON A MAPREDUCE JOB SHOW SOMETHING LIKE MAP(50%) REDUCE(10%)? WHY IS THE REDUCERS' PROGRESS PERCENTAGE DISPLAYED WHEN THE MAPPERS ARE NOT FINISHED YET?

Reducers start copying intermediate key-value pairs from the mappers as soon as they are available. The progress calculation also takes into account the data transfer performed by the reduce process; therefore the reduce progress starts showing up as soon as any intermediate key-value pair from a mapper is available to be transferred to a reducer. Although the reducer progress is updated, the programmer-defined reduce method is called only after all the mappers have finished.
WHAT IS HDFS? HOW IS IT DIFFERENT FROM TRADITIONAL FILE SYSTEMS?

HDFS, the Hadoop Distributed File System, is responsible for storing huge data on the cluster. This is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant.
HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
HDFS provides high throughput access to application data and is suitable for applications that have large data sets.
HDFS is designed to support very large files. Applications that are compatible with HDFS are those that deal with large data sets. These applications write their data only once but they read it one or more times and require these reads to be satisfied at streaming speeds. HDFS supports write-once-read-many semantics on files.
WHAT IS HDFS BLOCK SIZE? HOW IS IT DIFFERENT FROM TRADITIONAL FILE SYSTEM BLOCK SIZE?

In HDFS data is split into blocks and distributed across multiple nodes in the cluster. Each block is typically 64 MB or 128 MB in size. Each block is replicated multiple times; the default is to replicate each block three times. Replicas are stored on different nodes. HDFS utilizes the local file system to store each HDFS block as a separate file. HDFS block size cannot be compared with the traditional file system block size.
WHAT IS A NAMENODE? HOW MANY INSTANCES OF NAMENODE RUN ON A HADOOP CLUSTER?

The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files in the file system, and tracks where across the cluster the file data is kept. It does not store the data of these files itself. There is only one NameNode process running on any Hadoop cluster. The NameNode runs in its own JVM process. In a typical production cluster it runs on a separate machine. The NameNode is a Single Point of Failure for the HDFS cluster: when the NameNode goes down, the file system goes offline. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
WHAT IS A DATANODE? HOW MANY INSTANCES OF DATANODE RUN ON A HADOOP CLUSTER?

A DataNode stores data in the Hadoop File System (HDFS). There is only one DataNode process running on any Hadoop slave node. The DataNode runs in its own JVM process. On startup, a DataNode connects to the NameNode. DataNode instances can talk to each other, mostly while replicating data.
HOW DOES THE CLIENT COMMUNICATE WITH HDFS?

The client communicates with HDFS using the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file on HDFS. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives. Client applications can talk directly to a DataNode once the NameNode has provided the location of the data.
HOW ARE HDFS BLOCKS REPLICATED?

HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. The NameNode makes all decisions regarding replication of blocks. HDFS uses a rack-aware replica placement policy. In the default configuration there are a total of three copies of a data block on HDFS: two copies are stored on DataNodes in the same rack and the third copy on a different rack.

What is the Hadoop MapReduce API contract for a key and value Class?

- The Key must implement the org.apache.hadoop.io.WritableComparable interface.
- The value must implement the org.apache.hadoop.io.Writable interface.

What is the use of Combiners in the Hadoop framework?

Combiners are used to increase the efficiency of a MapReduce program. They are used to aggregate intermediate map output locally on individual mapper outputs. Combiners can help you reduce the amount of data that needs to be transferred across to the reducers.
You can use your reducer code as a combiner if the operation performed is commutative and associative.
The execution of the combiner is not guaranteed; Hadoop may or may not execute a combiner. Also, if required, it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution.

Where the Mapper’s Intermediate data will be stored?
The mapper output (intermediate data) is stored on the Local file system (NOT HDFS) of each individual mapper nodes. This is typically a temporary directory location which can be setup in config by the Hadoop administrator. The intermediate data is cleaned up after the Hadoop Job completes.

Can Reducer talk with each other?
No, Reducer runs in isolation.

How does a NameNode handle the failure of the data nodes?

HDFS has master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients.

In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on.

The NameNode and DataNode are pieces of software designed to run on commodity machines.
NameNode periodically receives a Heartbeat and a Block report from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly.

A Blockreport contains a list of all blocks on a DataNode. When NameNode notices that it has not received a heartbeat message from a data node after a certain amount of time, the data node is marked as dead. Since blocks will be under replicated the system begins replicating the blocks that were stored on the dead DataNode.

The NameNode orchestrates the replication of data blocks from one DataNode to another. The replication data transfer happens directly between DataNodes, and the data never passes through the NameNode.
How does HDFS differ from NAS?

Following are differences between HDFS and NAS

o In HDFS Data Blocks are distributed across local drives of all machines in a cluster. Whereas in NAS data is stored on dedicated hardware.
o HDFS is designed to work with MapReduce System, since computation is moved to data. NAS is not suitable for MapReduce since data is stored separately from the computations.
o HDFS runs on a cluster of machines and provides redundancy using replication protocol. Whereas NAS is provided by a single machine therefore does not provide data redundancy.
What is NAS?
NAS (Network Attached Storage) is a kind of file system where data resides on one centralized machine; all cluster members read and write data from that shared storage, which is not as efficient as HDFS.

What is the maximum number of JVMs that can run on a slave node?
One or Multiple instances of Task Instance can run on each slave node. Each task instance is run as a separate JVM process. The number of Task instances can be controlled by configuration. Typically a high end machine is configured to run more task instances.

How many daemon processes run on a Hadoop cluster?

Hadoop is comprised of five separate daemons. Each of these daemons runs in its own JVM.
Following 3 Daemons run on Master nodes.

NameNode - This daemon stores and maintains the metadata for HDFS.

Secondary NameNode - Performs housekeeping functions for the NameNode.

JobTracker - Manages MapReduce jobs, distributes individual tasks to machines running the TaskTracker.

The following 2 daemons run on each slave node:

DataNode - Stores actual HDFS data blocks.

TaskTracker – It is Responsible for instantiating and monitoring individual Map and Reduce tasks.

What do you mean by Task Instance?
Task Instances are the actual MapReduce tasks that run on each slave node. The TaskTracker starts separate JVM processes to do the actual work (called Task Instances); this is to ensure that a process failure does not take down the entire task tracker. Each Task Instance runs in its own JVM process. There can be multiple task instance processes running on a slave node, based on the number of slots configured on the task tracker. By default a new task instance JVM process is spawned for each task.

Explain the use of TaskTracker in the Hadoop cluster?

A TaskTracker is a slave node daemon in the cluster that accepts tasks from the JobTracker, such as Map, Reduce, or Shuffle operations. The TaskTracker also runs in its own JVM process.

Every TaskTracker is configured with a set of slots; these indicate the number of tasks that it can accept. The TaskTracker starts separate JVM processes to do the actual work (called Task Instances); this is to ensure that a process failure does not take down the task tracker.

The Tasktracker monitors these task instances, capturing the output and exit codes. When the Task instances finish, successfully or not, the task tracker notifies the JobTracker.

The TaskTrackers also send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that it is still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated.

What are the two main parts of the Hadoop framework?

Hadoop consists of two main parts
 Hadoop distributed file system, a distributed file system with high throughput,
 Hadoop MapReduce, a software framework for processing large data sets.

How many instances of Tasktracker run on a Hadoop cluster?
There is one Daemon Tasktracker process for each slave node in the Hadoop cluster.

How is a task scheduled by the JobTracker?
The TaskTrackers send out heartbeat messages to the JobTracker, usually every few minutes, to reassure the JobTracker that it is still alive. These messages also inform the JobTracker of the number of available slots, so the JobTracker can stay up to date with where in the cluster work can be delegated. When the JobTracker tries to find somewhere to schedule a task within the MapReduce operations, it first looks for an empty slot on the same server that hosts the DataNode containing the data, and if not, it looks for an empty slot on a machine in the same rack.

What is the JobTracker and what it performs in a Hadoop Cluster?

The JobTracker is a daemon service that submits and tracks MapReduce tasks in the Hadoop cluster. It runs in its own JVM process, usually on a separate machine, and each slave node is configured with the job tracker node location.
The JobTracker is single point of failure for the Hadoop MapReduce service. If it goes down, all running jobs are halted.


Client applications submit jobs to the Job tracker.
 The JobTracker talks to the NameNode to determine the location of the data
 The JobTracker locates TaskTracker nodes with available slots at or near the data
 The JobTracker submits the work to the chosen TaskTracker nodes.
 The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
 A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
 When the work is completed, the JobTracker updates its status.
 Client applications can poll the JobTracker for information.

How many instances of the JobTracker can run on a Hadoop Cluster?
Only one

What happens if the number of reducers is 0?
In this case the outputs of the map-tasks go directly to the FileSystem, into the output path set by setOutputPath(Path). The framework does not sort the map-outputs before writing them out to the FileSystem.
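
A map-only job is configured like this (the output path is hypothetical):

job.setNumReduceTasks(0);                                          // map-only job
FileOutputFormat.setOutputPath(job, new Path("/out/map-only"));    // mapper output lands here, unsorted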

Is it possible for a Job to have 0 reducers?
It is legal to set the number of reduce-tasks to zero if no reduction is desired.

How many Reducers should be configured?

The right number of reduces seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * mapreduce.tasktracker.reduce.tasks.maximum).
With 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75 the faster nodes will finish their first round of reduces and launch a second wave of reduces, doing a much better job of load balancing. Increasing the number of reduces increases the framework overhead, but improves load balancing and lowers the cost of failures.

What is reduce phase of a Reducer?
In this phase the reduce(MapOutKeyType, Iterable, Context) method is called for each pair in the grouped inputs. The output of the reduce task is typically written to the FileSystem via Context.write(ReduceOutKeyType, ReduceOutValType). Applications can use the Context to report progress, set application-level status messages and update Counters, or just indicate that they are alive. The output of the Reducer is not sorted.
Explain the Reducer’s Sort phase?
The framework groups Reducer inputs by keys (since different mappers may have output the same key) in this stage. The shuffle and sort phases occur simultaneously; while map-outputs are being fetched they are merged (It is similar to merge-sort).

How does the shuffle work?
Input to the Reducer is the sorted output of the mappers. In this phase the framework fetches the relevant partition of the output of all the mappers, via HTTP.

The primary phases of the Reducer?
Shuffle, Sort and Reduce

Explain the core methods of the Reducer?
The API of Reducer is very similar to that of Mapper, there's a run() method that receives a Context containing the job's configuration as well as interfacing methods that return data from the reducer itself back to the framework. The run() method calls setup() once, reduce() once for each key associated with the reduce task, and cleanup() once at the end. Each of these methods can access the job's configuration data by using Context.getConfiguration().
As in Mapper, any or all of these methods can be overridden with custom implementations. If none of these methods are overridden, the default reducer operation is the identity function; values are passed through without further processing.
The heart of Reducer is its reduce() method. This is called once per key; the second argument is an Iterable which returns all the values associated with that key.
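
A sketch of a Reducer overriding these life-cycle methods (assuming the newer mapreduce API; the class name is illustrative):

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void setup(Context context) {
        // called once per task, e.g. to read settings via context.getConfiguration()
    }

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {      // all values associated with this key
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }

    @Override
    protected void cleanup(Context context) {
        // called once at the end of the task
    }
}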



