Preface
Files and Directories Used in this Exercise
In this exercise you will create a Map-only MapReduce job.
Your application will process a web server's access log to count the number of times gifs, jpegs, and other resources have been retrieved. Your job will report three figures: the number of gif requests, the number of jpeg requests, and the number of other requests.
Hints
1. Use a Map-only MapReduce job by setting the number of Reducers to 0 in the driver code.
2. For input data, use the Web access log file that you uploaded to the HDFS /user/training/weblog directory in the "Using HDFS" exercise.
3. Use a counter group such as ImageCounter, with counter names such as gif, jpeg, and other. (The solution code below uses jpg rather than jpeg, matching the .jpg file extension.)
4. In your driver code, retrieve the values of the counters after the job has completed and report them using System.out.println.
5. The output folder on HDFS will contain empty Mapper output files, because the Mapper does not write any key/value pairs; the results are reported only through counters.
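Before wiring this up as a Mapper, the parsing strategy from the hints can be sketched in plain Java with no Hadoop dependency. The ImageTally class and classify helper below are illustrative names, not part of the lab's solution code:

```java
public class ImageTally {
    // Classify one access-log line as "jpg", "gif", or "other";
    // return null if the line has no quoted request section.
    static String classify(String line) {
        // Split on the double-quote delimiter; the request is field 1, e.g.
        // 96.7.4.14 - - [24/Apr/2011:04:20:11 -0400] "GET /cat.jpg HTTP/1.1" 200 12433
        String[] fields = line.split("\"");
        if (fields.length < 2) return null;
        // Split the request ("GET /cat.jpg HTTP/1.1") on spaces; the file name is field 1.
        String[] request = fields[1].split(" ");
        if (request.length < 2) return null;
        String fileName = request[1].toLowerCase();
        if (fileName.endsWith(".jpg")) return "jpg";
        if (fileName.endsWith(".gif")) return "gif";
        return "other";
    }

    public static void main(String[] args) {
        String line = "96.7.4.14 - - [24/Apr/2011:04:20:11 -0400] "
                + "\"GET /cat.jpg HTTP/1.1\" 200 12433";
        System.out.println(ImageTally.classify(line)); // prints jpg
    }
}
```

In the real Mapper, each return value corresponds to one counter increment instead of a return.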
Solution Code
- Mapper
In map(), use the group name and the counter name to retrieve a Counter from the Context object, then call increment() on it.
- Driver
After the job completes, call getCounters() on the Job object; it returns a Counters object, from which findCounter(group, name) retrieves each counter.
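Hadoop addresses every counter by a (group name, counter name) pair. As a plain-Java illustration of that two-level lookup, with no Hadoop dependency (the CounterGroups class below is a hypothetical stand-in, not Hadoop's actual Counters API):

```java
import java.util.HashMap;
import java.util.Map;

public class CounterGroups {
    // counters.get(group).get(name) mirrors context.getCounter(group, name).
    private final Map<String, Map<String, Long>> counters = new HashMap<>();

    void increment(String group, String name) {
        counters.computeIfAbsent(group, g -> new HashMap<>())
                .merge(name, 1L, Long::sum);
    }

    long getValue(String group, String name) {
        // Mirrors job.getCounters().findCounter(group, name).getValue();
        // an untouched counter reads as 0.
        return counters.getOrDefault(group, Map.of()).getOrDefault(name, 0L);
    }

    public static void main(String[] args) {
        CounterGroups c = new CounterGroups();
        c.increment("ImageCounter", "jpg");
        c.increment("ImageCounter", "jpg");
        c.increment("ImageCounter", "gif");
        System.out.println("JPG = " + c.getValue("ImageCounter", "jpg"));     // JPG = 2
        System.out.println("GIF = " + c.getValue("ImageCounter", "gif"));     // GIF = 1
        System.out.println("OTHER = " + c.getValue("ImageCounter", "other")); // OTHER = 0
    }
}
```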
- Mapper
package solution;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Example input line:
 * 96.7.4.14 - - [24/Apr/2011:04:20:11 -0400] "GET /cat.jpg HTTP/1.1" 200 12433
 */
public class ImageCounterMapper extends
    Mapper<LongWritable, Text, Text, IntWritable> {

  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {

    /*
     * Split the line using the double-quote character as the delimiter.
     */
    String[] fields = value.toString().split("\"");
    if (fields.length > 1) {
      String request = fields[1];

      /*
       * Split the part of the line after the first double quote
       * using the space character as the delimiter to get a file name.
       */
      fields = request.split(" ");

      /*
       * Increment a counter based on the file's extension.
       */
      if (fields.length > 1) {
        String fileName = fields[1].toLowerCase();
        if (fileName.endsWith(".jpg")) {
          context.getCounter("ImageCounter", "jpg").increment(1);
        } else if (fileName.endsWith(".gif")) {
          context.getCounter("ImageCounter", "gif").increment(1);
        } else {
          context.getCounter("ImageCounter", "other").increment(1);
        }
      }
    }
  }
}
- Driver
package solution;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ImageCounter extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {

    if (args.length != 2) {
      System.out.printf("Usage: ImageCounter <input dir> <output dir>\n");
      return -1;
    }

    Job job = new Job(getConf());
    job.setJarByClass(ImageCounter.class);
    job.setJobName("Image Counter");

    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // This is a map-only job, so we do not call setReducerClass.
    job.setMapperClass(ImageCounterMapper.class);

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    /*
     * Set the number of reduce tasks to 0.
     */
    job.setNumReduceTasks(0);

    boolean success = job.waitForCompletion(true);

    if (success) {
      /*
       * Print out the counters that the mappers have been incrementing.
       */
      long jpg = job.getCounters().findCounter("ImageCounter", "jpg").getValue();
      long gif = job.getCounters().findCounter("ImageCounter", "gif").getValue();
      long other = job.getCounters().findCounter("ImageCounter", "other").getValue();

      System.out.println("JPG = " + jpg);
      System.out.println("GIF = " + gif);
      System.out.println("OTHER = " + other);

      return 0;
    } else {
      return 1;
    }
  }

  public static void main(String[] args) throws Exception {
    int exitCode = ToolRunner.run(new Configuration(), new ImageCounter(), args);
    System.exit(exitCode);
  }
}
Lab Experiment
1. Build the project and run the MapReduce job.