java.io.IOException: Filesystem closed

Hi hadoopers,

Here is the exception that ruined my Saturday night and failed my Mapper task.

  • The Mapper reads the lines one by one and tokenizes them.
  • The last token contains the path of a file in HDFS.
  • I need to open that file and read its contents.

For the above task, this is the flow I followed in the Mapper.

[Image: hadoop045-filesystem]

But my mapper failed with the following exception.

org.apache.hadoop.mapred.MapTask: Ignoring exception during close for org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader@1cb3ec38
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:689)
at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:617)

The FileSystem object is supposed to be global; it is a shared, cached instance. When I closed the filesystem, the Mapper's own input was also closed, which broke the complete flow. So I closed only the file stream and did not close the filesystem explicitly, which resolved the problem.
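Here is a minimal sketch of the corrected flow inside the Mapper, assuming the input lines are tab-separated and the last token is an HDFS path; the class and field names are illustrative, not the exact code from my job.

package org.grassfield.hadoop.sample; // hypothetical package for this sketch

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class FileContentMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Assumption: tab-separated tokens, last token is an HDFS path
        String[] tokens = value.toString().split("\t");
        Path path = new Path(tokens[tokens.length - 1]);

        // Reuse the shared, cached FileSystem. Do NOT call fs.close() here,
        // otherwise the framework's own readers fail with
        // "java.io.IOException: Filesystem closed".
        FileSystem fs = FileSystem.get(context.getConfiguration());
        FSDataInputStream in = fs.open(path);
        BufferedReader br = new BufferedReader(new InputStreamReader(in));
        try {
            String line;
            while ((line = br.readLine()) != null) {
                // What you do with the contents depends on your job;
                // here we simply emit (path, line) pairs
                context.write(new Text(path.toString()), new Text(line));
            }
        } finally {
            br.close();   // close only the stream, never the FileSystem
        }
    }
}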

Ref: https://github.com/linkedin/gobblin/issues/1219

 


MapReduce Job Execution Process – Job Cleanup

Hi Hadoopers,

So we are looking at the 7th circle today, which is the job cleanup.

 

[Image: hadoop037-job-submission-1]

An MR job writes many intermediate results and junk files during its operation. Once the job is completed, this junk would occupy space on HDFS with no benefit any more. Hence the cleanup task is launched.

[Image: hadoop043-hadoop-job-cleanup]

  1. The Job Tracker informs all the Task Trackers to perform the cleanup.
  2. Each Task Tracker cleans up its work folders.
  3. They clean up the temporary output directory.
  4. Once the cleanup task is successful, the Task Tracker ends the job by writing the _SUCCESS file (see the sketch after this list).
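Once that marker is in place, downstream tools typically just check for it. Here is a minimal sketch (not part of the cleanup task itself) that checks for the _SUCCESS marker using the FileSystem API; the class name and output path argument are illustrative.

package org.grassfield.hadoop.sample; // hypothetical package for this sketch

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SuccessMarkerCheck {
    public static void main(String[] args) throws Exception {
        // args[0] is the job output directory, e.g. /user/hadoop/feed/output
        FileSystem fs = FileSystem.get(new Configuration());
        Path marker = new Path(args[0], "_SUCCESS");
        // The cleanup step writes this empty marker once the job has finished
        System.out.println(marker + " exists? " + fs.exists(marker));
        fs.close();
    }
}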


MapReduce Job Execution Process – Reduce Function

Hi Hadoopers,

We are in the 6th circle today, which is the reducer function. The job submitted by the user was initiated in the 2nd circle and its setup was completed in the 3rd circle.

The Map task was executed in the 4th circle and the sort & shuffle was completed in the 5th circle.

[Image: hadoop037-job-submission-1]

The reducer will collect the output from all the mappers to apply the user defined reduce function.

[Image: hadoop043-hadoop-reducer]

  1. The Task Tracker launches the reduce task.
  2. The reduce task (not the reduce function) reads the jar and xml of the job.
  3. It executes the shuffle. By the time the reduce task starts, not all the mappers may have completed their work, so it goes to the individual mapper machines to collect their output and shuffles it.
  4. Once all the mapping activity is finished, it invokes the user's reduce function (one or more reducers); a minimal sketch is given after this list.
  5. Each reducer completes its job and writes the output records to HDFS.
  6. The output is stored in a temporary output file first.
  7. Once all the reducers have completed their job, the final output is written to the reducer partition file.
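To make step 4 concrete, here is a minimal sketch of a user-defined reduce function, assuming the classic word-count style job (Text keys and IntWritable counts); it is an illustration, not the code of any particular job from this series.

package org.grassfield.hadoop.sample; // hypothetical package for this sketch

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class CountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // The shuffle has already grouped all values for this key together
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        // Records written here go to the temporary output first and are
        // promoted to the final partition file when the job commits
        context.write(key, new IntWritable(sum));
    }
}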

MapReduce Job Execution Process – Map Task Execution

Hi Hadoopers,

The user has submitted his job. He has permissions. We have slots in the cluster. The job setup is completed. We now look at the 4th circle given below, the Map Task Execution.

[Image: hadoop037-job-submission-1]

 

The below given diagram depicts the Map Task Execution.

[Image: hadoop041-map-task-execution]

  1. The Task Tracker launches the Map task.
  2. The Map task reads the jar file given by the user. This is what we write in Eclipse. In the entire framework, this is our contribution 🙂
    The Map task also reads the job config (input path, output path etc.). It gets everything from HDFS, as all these were already uploaded to HDFS initially.
  3. The Map task reads the input splits from HDFS.
  4. From the input splits, the Map task creates the records.
  5. The Map task invokes the user's Mapper with each record (a minimal mapper sketch is given after this list).
  6. The Mapper writes intermediate output.
  7. The task sorts the intermediate output based on key and flushes it to disk.
  8. The Map task informs the Task Tracker about the completion of the job.
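To make step 5 concrete, here is a minimal sketch of the user Mapper that the Map task invokes for every record, again in the word-count style; the names are illustrative and not from a specific job in this series.

package org.grassfield.hadoop.sample; // hypothetical package for this sketch

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // The framework hands us one record (a line) per call; we emit
        // intermediate key/value pairs that the Map task will sort and spill
        StringTokenizer tokenizer = new StringTokenizer(value.toString());
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, ONE);
        }
    }
}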

MapReduce Job Execution Process – Job Submission

Hi Hadoopers,

After publishing many posts about MapReduce code, we’ll now look at the MR internals: how an MR job is submitted and executed.

[Image: hadoop037-job-submission-1]

This post talks about first circle – Job Submission.

We compiled the MR code and the jar is ready. We execute the job with hadoop jar xxxxxx. First the job is submitted to Hadoop. There are schedulers which run the job based on cluster capacity and availability.

I want to scribble down quick notes on Job Submission using the Gantt diagram given below.

[Image: hadoop038-job-submission-2]

  1. The user submits the job to the Job Client.
  2. The Job Client talks to the Job Tracker to get a job id.
  3. The Job Client creates a staging directory in HDFS. This is where all the files related to the job get uploaded.
  4. The MR code and configuration are uploaded to the staging directory, with 10 replicas of their blocks. The jar file of the job, the job splits, the split metadata and job.xml, which has the job description, are uploaded.
  5. Splits are computed automatically and the input is read.
  6. The metadata of the splits is uploaded to HDFS.
  7. The job is submitted and is ready to execute (a minimal driver sketch showing this submission from the client side is given after this list).
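From the client side, all of this is triggered by a small driver class. Here is a minimal sketch, assuming the word-count style TokenMapper and CountReducer sketched in the earlier posts; the class names and paths are illustrative.

package org.grassfield.hadoop.sample; // hypothetical package for this sketch

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CountDriver {
    public static void main(String[] args) throws Exception {
        // args[0] = input path, args[1] = output path
        Job job = Job.getInstance(new Configuration(), "token count");
        job.setJarByClass(CountDriver.class);   // this jar goes to the staging directory
        job.setMapperClass(TokenMapper.class);
        job.setReducerClass(CountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Submits the job (jar, splits, split metadata and job.xml are uploaded
        // to the staging directory) and waits for completion
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

You would run it like any other job, e.g. hadoop jar yourjob.jar org.grassfield.hadoop.sample.CountDriver <input> <output>.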

Lab12 – Writing local file to HDFS

Hi Hadoopers,

You might have seen my earlier post on how to read from HDFS using the APIs. Here is a post that shows how to write to HDFS.

Input file in HDD – /opt/hadoop/feed/output/2016-09-18
HDFS file – /user/hadoop/feed/2016-09-18


The following code needs two inputs: one is the local file and the other is the HDFS file path.

package org.grassfield.hadoop.input;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.FileReader;
import java.io.IOException;
import java.io.OutputStreamWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * upload local file to HDFS
 * 
 * @author pandian
 *
 */
public class LoadItemsHdfs {

    /**
     * @param args    localFile remoteFile
     * @throws IOException
     */
    public static void main(String[] args) throws IOException {
        // Destination file in HDFS (args[1]), qualified with the NameNode URI
        Path path = new Path("hdfs://gandhari:9000" + args[1]);
        FileSystem fs = FileSystem.get(new Configuration());

        // Create (or overwrite) the HDFS file and wrap it in a writer
        BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fs.create(path, true)));

        // Read the local file (args[0]) line by line and copy it to HDFS
        BufferedReader br = new BufferedReader(new FileReader(args[0]));
        String line = null;
        while ((line = br.readLine()) != null) {
            bw.write(line);
            bw.write('\n');
        }
        br.close();
        bw.flush();
        bw.close();

        // Closing the FileSystem is fine here, as this is a standalone client
        fs.close();
    }

}

Let’s execute it.

hadoop@gandhari:~/jars$ hadoop jar FeedCategoryCount-9.jar org.grassfield.hadoop.input.LoadItemsHdfs ../feed/output/2016-09-18 /user/hadoop/feed/2016-09-18
hadoop@gandhari:~/jars$ hadoop fs -ls /user/hadoop/feed
Found 1 items
-rw-r--r--   3 hadoop supergroup     120817 2016-09-18 06:51 /user/hadoop/feed/2016-09-18
hadoop@gandhari:~/jars$ hadoop fs -cat /user/hadoop/feed/2016-09-18

application/rss+xml     Today Online - Hot news null    http://www.todayonline.com/authors/ku-swee-yong null    Rosberg in pole position to claim victory on Sunday
 Today   http://www.todayonline.com/sports/motor-racing/rosberg-pole-position-claim-victory-sunday       Sat Sep 17 23:44:59 MYT 2016    []
 application/rss+xml     Today Online - Hot news null    http://www.todayonline.com/authors/ku-swee-yong null    No slowing Tang down despite qualifying setback Today
 http://www.todayonline.com/sports/motor-racing/no-slowing-tang-down-despite-qualifying-setback  Sat Sep 17 22:09:20 MYT 2016    []

HDFS Permissions

The Hadoop Distributed File System (HDFS) implements a permissions model for files and directories that shares much of the POSIX model. Each file and directory is associated with an owner and a group. The file or directory has separate permissions for the user that is the owner, for other users that are members of the group, and for all other users. For files, the r permission is required to read the file, and the w permission is required to write or append to the file. For directories, the r permission is required to list the contents of the directory, the w permission is required to create or delete files or directories, and the x permission is required to access a child of the directory.
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html

This assignment will create a new user and assign a folder in HDFS to him, to demonstrate the permission capabilities.

HDFS

Add a Unix user

hadoop@gandhari:~$ sudo groupadd feeder
hadoop@gandhari:~$ sudo useradd -g feeder -m feeder
hadoop@gandhari:~$ sudo passwd feeder

Create a folder in HDFS and assign permissions

hadoop@gandhari:~$ hadoop fs -mkdir /feeder
hadoop@gandhari:~$ hadoop fs -chown -R feeder:feeder /feeder
hadoop@gandhari:~$ hadoop fs -ls /
Found 6 items
-rw-r--r--   1 hadoop supergroup       1749 2016-08-24 06:01 /data
drwxr-xr-x   - feeder feeder              0 2016-09-05 15:34 /feeder
drwxr-xr-x   - hadoop supergroup          0 2016-09-05 15:15 /hbase
drwxr-xr-x   - hadoop supergroup          0 2016-08-24 13:53 /pigdata
drwxrwx---   - hadoop supergroup          0 2016-08-24 16:14 /tmp
drwxr-xr-x   - hadoop supergroup          0 2016-08-24 13:56 /user

We need to enable the permissions in hdfs-site.xml

hadoop@gandhari:~$ vi etc/hadoop/hdfs-site.xml
        <property>
                <name>dfs.permissions</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.permissions.enabled</name>
                <value>true</value>
        </property>

After this change, we need to restart the DFS daemons.

hadoop@gandhari:~$ stop-dfs.sh
hadoop@gandhari:~$ start-dfs.sh

Let’s test the permissions using another user, kannan, who does not have write permission to /data/feeder.

kannan@gandhari:~$ /opt/hadoop/bin/hadoop fs -put javashine.xml /data/feeder
put: Permission denied: user=kannan, access=EXECUTE, inode="/data":hadoop:supergroup:-rw-r--r--
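The denied message shows the access check failing on the /data inode with its owner, group and mode. If you prefer to inspect ownership and permissions from Java rather than hadoop fs -ls, here is a minimal sketch using the FileStatus API; the class name and the path argument are illustrative.

package org.grassfield.hadoop.sample; // hypothetical package for this sketch

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowPermissions {
    public static void main(String[] args) throws Exception {
        // args[0] is the HDFS path to inspect, e.g. /feeder
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path(args[0]));
        // Prints the same permission/owner/group fields shown by 'hadoop fs -ls'
        System.out.println(status.getPath() + " "
                + status.getPermission() + " "
                + status.getOwner() + ":" + status.getGroup());
        fs.close();
    }
}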

See you in another interesting post!