Hadoop Hello World

First, start Hadoop.

Before running this hello world, I assume your environment is already set up (mine is a pseudo-distributed installation). 
I will use the WordCount class from the Hadoop source code as Hadoop's hello world. 
(1) The WordCount class from the Hadoop source code is as follows: 

 
    package org.apache.hadoop.examples;
      
    import java.io.IOException;   
    import java.util.StringTokenizer;   
      
    import org.apache.hadoop.conf.Configuration;   
    import org.apache.hadoop.fs.Path;   
    import org.apache.hadoop.io.IntWritable;   
    import org.apache.hadoop.io.Text;   
    import org.apache.hadoop.mapreduce.Job;   
    import org.apache.hadoop.mapreduce.Mapper;   
    import org.apache.hadoop.mapreduce.Reducer;   
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;   
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;   
    import org.apache.hadoop.util.GenericOptionsParser;   
      
    public class WordCount {   
      
      public static class TokenizerMapper    
           extends Mapper<Object, Text, Text, IntWritable>{   
           
        private final static IntWritable one = new IntWritable(1);   
        private Text word = new Text();   
             
        public void map(Object key, Text value, Context context   
                        ) throws IOException, InterruptedException {   
          StringTokenizer itr = new StringTokenizer(value.toString());   
          while (itr.hasMoreTokens()) {   
            word.set(itr.nextToken());   
            context.write(word, one);   
          }   
        }   
      }   
         
      public static class IntSumReducer    
           extends Reducer<Text,IntWritable,Text,IntWritable> {   
        private IntWritable result = new IntWritable();   
      
        public void reduce(Text key, Iterable<IntWritable> values,    
                           Context context   
                           ) throws IOException, InterruptedException {   
          int sum = 0;   
          for (IntWritable val : values) {   
            sum += val.get();   
          }   
          result.set(sum);   
          context.write(key, result);   
        }   
      }   
      
      public static void main(String[] args) throws Exception {   
        Configuration conf = new Configuration();   
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();   
        if (otherArgs.length != 2) {   
          System.err.println("Usage: wordcount <in> <out>");   
          System.exit(2);   
        }   
        Job job = new Job(conf, "word count");   
        job.setJarByClass(WordCount.class);   
        job.setMapperClass(TokenizerMapper.class);   
        job.setCombinerClass(IntSumReducer.class);   
        job.setReducerClass(IntSumReducer.class);   
        job.setOutputKeyClass(Text.class);   
        job.setOutputValueClass(IntWritable.class);   
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));   
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));   
        System.exit(job.waitForCompletion(true) ? 0 : 1);   
      }   
    }
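
Before walking through the build-and-run steps, it may help to see what the job computes in plain Java. The following is a minimal sketch of my own (LocalWordCount is not part of Hadoop); it tokenizes lines and counts words the same way the map and reduce phases do together:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.StringTokenizer;

    // Illustration only: the same tokenize-and-count logic as the
    // MapReduce job above, run in a single JVM without Hadoop.
    public class LocalWordCount {
      public static void main(String[] args) {
        String[] lines = { "hello world", "hello hadoop", "hello mapreduce" };
        Map<String, Integer> counts = new HashMap<String, Integer>();
        for (String line : lines) {
          // the "map" step: split each line into tokens
          StringTokenizer itr = new StringTokenizer(line);
          while (itr.hasMoreTokens()) {
            String word = itr.nextToken();
            // the "reduce" step: sum the 1s emitted for each word
            Integer prev = counts.get(word);
            counts.put(word, prev == null ? 1 : prev + 1);
          }
        }
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
          System.out.println(e.getKey() + "\t" + e.getValue());
        }
      }
    }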


(2) Put this file in the Hadoop working directory, and create a WordCount folder under that directory:

cd hadoopdir
mkdir WordCount
(3) Compile the source against the Hadoop jars, writing the class files into the WordCount folder (the jar names below are for Hadoop 0.21.0; adjust them to your version):

javac -classpath hadoop-common-0.21.0.jar:lib/commons-cli-1.2.jar:hadoop-mapred-0.21.0.jar -d WordCount WordCount.java
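Because -d WordCount preserves the package declared in the source, the compiled classes end up under org/apache/hadoop/examples/ inside the WordCount folder:

    WordCount/org/apache/hadoop/examples/WordCount.class
    WordCount/org/apache/hadoop/examples/WordCount$TokenizerMapper.class
    WordCount/org/apache/hadoop/examples/WordCount$IntSumReducer.class

This layout is why the next step jars up org/* from inside that folder.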
 
(4) Change into the WordCount folder and package the compiled classes into a jar:

cd WordCount
jar -cvf wordcount.jar org/*

Then copy the generated jar back to the Hadoop working directory:

cp wordcount.jar ../
(5) Go back to the Hadoop working directory and create an input directory with a file named file1 in it:

cd ..
mkdir input
cd input
vi file1

Enter the following content:
hello world 
hello hadoop 
hello mapreduce

 

Then upload the file to the Hadoop distributed file system:

[liyan@cctv226 hadoop-0.21.0]$ ./bin/hadoop fs -put input/file* input  
12/04/19 10:10:21 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
12/04/19 10:10:22 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id

(6) Now run the job:

 
./bin/hadoop jar wordcount.jar org.apache.hadoop.examples.WordCount input wordcount_output
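
The two arguments after the main class are passed through to WordCount itself: the first is the HDFS input path, the second the output path. The output directory (wordcount_output here) must not already exist, or Hadoop will refuse to start the job; remove it first (for example with ./bin/hadoop fs -rmr wordcount_output) before re-running.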


The job prints progress and counters as it runs:

    12/04/19 10:11:04 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000

    12/04/19 10:11:05 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
    12/04/19 10:11:05 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    12/04/19 10:11:05 INFO input.FileInputFormat: Total input paths to process : 1
    12/04/19 10:11:05 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
    12/04/19 10:11:05 INFO mapreduce.JobSubmitter: number of splits:1
    12/04/19 10:11:06 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null
    12/04/19 10:11:06 INFO mapreduce.Job: Running job: job_201204191009_0001
    12/04/19 10:11:07 INFO mapreduce.Job:  map 0% reduce 0%


    12/04/19 10:11:36 INFO mapreduce.Job:  map 100% reduce 0%
    12/04/19 10:11:45 INFO mapreduce.Job:  map 100% reduce 100%
    12/04/19 10:11:48 INFO mapreduce.Job: Job complete: job_201204191009_0001
    12/04/19 10:11:48 INFO mapreduce.Job: Counters: 33
            FileInputFormatCounters
                    BYTES_READ=45
            FileSystemCounters
                    FILE_BYTES_READ=59
                    FILE_BYTES_WRITTEN=150
                    HDFS_BYTES_READ=148
                    HDFS_BYTES_WRITTEN=37
            Shuffle Errors
                    BAD_ID=0
                    CONNECTION=0
                    IO_ERROR=0
                    WRONG_LENGTH=0
                    WRONG_MAP=0
                    WRONG_REDUCE=0
            Job Counters 
                    Data-local map tasks=1
                    Total time spent by all maps waiting after reserving slots (ms)=0
                    Total time spent by all reduces waiting after reserving slots (ms)=0
                    SLOTS_MILLIS_MAPS=22880
                    SLOTS_MILLIS_REDUCES=6505
                    Launched map tasks=1
                    Launched reduce tasks=1
            Map-Reduce Framework
                    Combine input records=6
                    Combine output records=4
                    Failed Shuffles=0
                    GC time elapsed (ms)=17
                    Map input records=4
                    Map output bytes=65
                    Map output records=6
                    Merged Map outputs=1
                    Reduce input groups=4
                    Reduce input records=4
                    Reduce output records=4
                    Reduce shuffle bytes=59
                    Shuffled Maps =1
                    Spilled Records=8
                    SPLIT_RAW_BYTES=103
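
These counters line up with the sample input: the mapper emits one (word, 1) record per word, so Map output records=6 (hello three times, plus world, hadoop, and mapreduce once each). Because IntSumReducer also runs as the combiner, those six records are collapsed into four partial sums before the shuffle (Combine input records=6, Combine output records=4), and the reducer then writes one line per distinct word (Reduce output records=4).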

(7) Finally, view the results:

./bin/hadoop fs -cat wordcount_output/part-r-00000



hadoop  1 
hello   3 
mapreduce       1 
world   1
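
The output can also be read programmatically through the HDFS API rather than with fs -cat. Below is a minimal sketch of my own (the class name CatOutput is not from Hadoop; the path assumes the job above):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Illustration only: read the reducer output straight from HDFS.
    public class CatOutput {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // picks up the default file system from the Hadoop configuration
        FileSystem fs = FileSystem.get(conf);
        Path part = new Path("wordcount_output/part-r-00000");
        BufferedReader reader =
            new BufferedReader(new InputStreamReader(fs.open(part)));
        try {
          String line;
          while ((line = reader.readLine()) != null) {
            System.out.println(line); // e.g. "hello	3"
          }
        } finally {
          reader.close();
        }
      }
    }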