MapReduce Job Execution Process – Job scheduling

Hi Hadoopers,

We shall talk about the 3rd circle today, as we have already covered job submission and job initialization.

[Figure: MapReduce job submission]

Scheduling jobs is an interesting concept. I’m really excited to see the communication between the Scheduler, the Job Tracker and the Task Tracker.

[Figure: job scheduling flow]

  1. The Task Tracker keeps sending heartbeats to the Job Tracker with the status of its tasks. In effect, it tells the Job Tracker that a task is completed and that it wants more work (a simplified sketch of this exchange is given after this list).
  2. The Job Tracker updates the task status and makes a note of the Task Tracker’s message.
  3. The Job Tracker goes to the Scheduler asking for tasks.
  4. The Scheduler updates its task-scheduling records. Based on the job-scheduling policy – execution policy, priority, etc. – it either makes the job client wait or processes the job.
  5. The Job Tracker gets the task.
  6. It submits the task to the Task Tracker.
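Here is a minimal sketch of that exchange in Java. All of these class names are invented for illustration – the real JobTracker/TaskTracker RPC protocol is far more involved – but it models steps 1 to 6 above: a heartbeat reports status, the Job Tracker records it, consults the Scheduler for a task, and hands the task back.

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical, simplified model of the MR v1 heartbeat exchange.
public class HeartbeatSketch {

    // What a Task Tracker reports in each heartbeat (step 1).
    record Heartbeat(String trackerId, boolean taskCompleted, boolean wantsMoreWork) {}

    // Stands in for the Scheduler and its queue of pending tasks (steps 3-4).
    static class Scheduler {
        private final Queue<String> pendingTasks = new ArrayDeque<>();
        void submit(String task) { pendingTasks.add(task); }
        // Next task per the scheduling policy; null means the job client waits.
        String nextTask() { return pendingTasks.poll(); }
    }

    // Stands in for the Job Tracker (steps 2, 5-6).
    static class JobTracker {
        private final Scheduler scheduler;
        JobTracker(Scheduler scheduler) { this.scheduler = scheduler; }

        String onHeartbeat(Heartbeat hb) {
            if (hb.taskCompleted()) {
                System.out.println(hb.trackerId() + ": task marked complete"); // step 2
            }
            return hb.wantsMoreWork() ? scheduler.nextTask() : null;           // steps 3-5
        }
    }

    public static void main(String[] args) {
        Scheduler scheduler = new Scheduler();
        scheduler.submit("map-task-0001");

        JobTracker jobTracker = new JobTracker(scheduler);
        String assigned = jobTracker.onHeartbeat(new Heartbeat("tracker-A", true, true));
        System.out.println("assigned: " + assigned);  // step 6: assigned: map-task-0001
    }
}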

YARN process flow

Hi,

Yesterday I wrote my note on MR v1. After seeing the problems with it, YARN – which is MR v2 – was released.

[Figure: Hadoop YARN]

It brings a lot of advantages, because MR v1 had the following problems.

  1. MR v1 has two major daemons – the Job Tracker and the Task Tracker. As all tasks are aggregated under these daemons, they may hang when we process a large set of data.
  2. It does not provide high availability (HA).
[Figure: YARN process flow]

YARN – MR v2 – Process flow

  1. JOB SUBMISSION: The user executes his YARN code to launch a job. A new JVM is launched.
  2. JOB SUBMISSION: The job is submitted to the Application Manager of the Resource Manager.
  3. JOB SUBMISSION: In turn, the Resource Manager gives back a job identifier. The job resources are copied to HDFS; the Name Node holds the metadata of these resources. (It is not the file content, but the job details.) A client-side submission example is given after this list.
  4. RESOURCE MGMT: The Resource Manager talks to one of the machines where a Node Manager is running – Machine A. It asks that Node Manager to start an Application Master on that machine.
  5. RESOURCE MGMT: The Node Manager accepts the call and spawns the Application Master.
  6. RESOURCE MGMT: The Application Master goes to the Name Node to get the metadata of the job details. It calculates the resource usage, split information, etc. needed to execute the task.
  7. RESOURCE MGMT: It requests an Application Master instance ID from the Resource Manager.
  8. RESOURCE MGMT: The Application Master identifies machines and calculates their resource information. It informs the Resource Manager which Node Managers it has identified to execute the task.
  9. RESOURCE MGMT: The Resource Manager gives an acknowledgement, allowing the Application Master to allocate containers from the identified Node Managers (a container-allocation sketch is given after this list).
  10. RESOURCE MGMT: The Application Master talks to the Node Manager of Machine B – the box where the data lies – to inform it of the approved containers.
  11. TASK EXECUTION: The Node Manager creates the container. The task will be executed inside it.
  12. TASK EXECUTION: The Task Container gets the job resources from HDFS that we copied in step 3.
  13. TASK EXECUTION: The task starts.
  14. TASK EXECUTION: It is the Application Master’s duty to monitor the container, so the container sends its MapReduce status (task-related information) to the Application Master.
  15. TASK EXECUTION: The Node Manager of Machine B updates the Resource Manager about the resource consumption (CPU, memory, etc.). Based on this data, the Resource Manager decides which machines to use for the next job execution.
  16. TASK COMPLETION: The container informs the Application Master about the job completion. The Resource Manager is updated, and the job queue is updated. The Application Master de-registers the container, and the container is terminated.
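Steps 1 to 3 are exactly what the standard MapReduce client API does for you. The canonical word-count driver below uses the real org.apache.hadoop.mapreduce.Job API; the input and output paths come from the command line, and the job name is arbitrary. Calling submit() is what copies the job resources (jar, splits, configuration) to HDFS and obtains the job ID from the Resource Manager.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Splits each line into words and emits (word, 1).
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context ctx)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        ctx.write(word, ONE);
      }
    }
  }

  // Sums the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      result.set(sum);
      ctx.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();          // step 1: client-side JVM
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Steps 2-3: submit() hands the job to the Resource Manager, receives a
    // job ID, and copies the jar, splits and config to HDFS.
    job.submit();
    System.out.println("Submitted as " + job.getJobID());
    System.exit(job.waitForCompletion(false) ? 0 : 1);
  }
}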
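For the resource-management steps (7 to 10) and the de-registration in step 16, the YARN client library exposes AMRMClient. The sketch below shows the corresponding calls. It is deliberately simplified – there is no NMClient to actually launch the container on Machine B, and no error handling – and it would only run inside an Application Master container that YARN itself launched; "machineB" is a hypothetical node name standing in for the machine that holds the data.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.FinalApplicationStatus;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Simplified Application Master, covering steps 7-10 and 16 above.
public class AmSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();

    AMRMClient<ContainerRequest> rm = AMRMClient.createAMRMClient();
    rm.init(conf);
    rm.start();

    // Step 7: register this AM instance with the Resource Manager.
    rm.registerApplicationMaster("", 0, "");

    // Steps 8-9: ask the RM for a container on the node that holds the data.
    Resource capability = Resource.newInstance(1024, 1);  // 1 GB, 1 vcore
    Priority priority = Priority.newInstance(0);
    rm.addContainerRequest(new ContainerRequest(
        capability, new String[] {"machineB"}, null, priority));

    // The allocate() heartbeat returns the granted containers; step 10
    // (actually starting a container on Machine B) uses NMClient, omitted here.
    rm.allocate(0.0f);

    // Step 16: de-register once the work is done.
    rm.unregisterApplicationMaster(FinalApplicationStatus.SUCCEEDED, "", "");
    rm.stop();
  }
}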

Happy Weekend, Hadoopers!