
I have output from my mapper:

Mapper: KEY, VALUE(Timestamp, someOtherAttributes)

My reducer receives:

Reducer: KEY, Iterable&lt;VALUE(Timestamp, someOtherAttributes)&gt;

I want the Iterable&lt;VALUE(Timestamp, someOtherAttributes)&gt; to be ordered by the Timestamp attribute. Is there any way to implement this?

I would like to avoid sorting manually inside the reducer, because of Hadoop's object-reuse pitfall (http://cornercases.wordpress.com/2011/08/18/hadoop-object-reuse-pitfall-all-my-reducer-values-are-the-same/): I would have to "deep-copy" every object out of the Iterable, which can cause a huge memory overhead. :(((


2 Answers


It's relatively easy: you need to write a comparator class for your VALUE class.

Take a closer look at http://vangjee.wordpress.com/2012/03/20/secondary-sorting-aka-sorting-values-in-hadoops-mapreduce-programming-paradigm/ especially at the "A solution for secondary sorting" part.
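The core of that secondary-sort technique is a composite key that carries the timestamp alongside the natural key, so the framework sorts the values during the shuffle and the reducer sees them already ordered. A minimal sketch of the ordering logic only, assuming the timestamp has been parsed to epoch millis (class and field names are illustrative; in a real job this would implement WritableComparable and be paired with a partitioner and grouping comparator that use only the natural key):

```java
// Hypothetical composite key: groups on naturalKey, sorts by timestamp.
class CompositeKey implements Comparable<CompositeKey> {
    final String naturalKey;    // what the reducer groups on
    final long timestampMillis; // what the shuffle sorts on within a group

    CompositeKey(String naturalKey, long timestampMillis) {
        this.naturalKey = naturalKey;
        this.timestampMillis = timestampMillis;
    }

    @Override
    public int compareTo(CompositeKey other) {
        // Primary order: natural key. Secondary order: timestamp ascending.
        int cmp = naturalKey.compareTo(other.naturalKey);
        return cmp != 0 ? cmp : Long.compare(timestampMillis, other.timestampMillis);
    }
}
```

With a grouping comparator that compares only naturalKey, all values for one key arrive in a single reduce call, ordered by timestamp, with no in-reducer sort or deep copy.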

answered 2013-01-14T14:31:04.713

You can also sort the values inside the reducer with a comparator:

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.TimeZone;

@Override
protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
    final SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
    // Copy the values first: Hadoop reuses the Text instance between iterations.
    List<String> list = new ArrayList<String>();
    for (Text val : values) {
        list.add(val.toString());
    }
    // Sort by the timestamp in the first CSV field.
    Collections.sort(list, new Comparator<String>() {
        public int compare(String s1, String s2) {
            try {
                long time1 = sdf.parse(s1.split(",")[0]).getTime();
                long time2 = sdf.parse(s2.split(",")[0]).getTime();
                // Long.compare avoids the truncation of casting epoch millis
                // to int and the overflow of returning "time1 - time2".
                return Long.compare(time1, time2);
            } catch (ParseException e) {
                // Don't return from a finally block: that would silently
                // swallow the exception. Fail loudly instead.
                throw new IllegalArgumentException("Unparseable timestamp", e);
            }
        }
    });
    for (String value : list) {
        context.write(key, new Text(value));
    }
}
answered 2016-03-09T12:07:40.137